Jan 29 11:37:26.093313 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 11:37:26.093353 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:37:26.093368 kernel: BIOS-provided physical RAM map:
Jan 29 11:37:26.093379 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:37:26.093390 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:37:26.093400 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:37:26.093416 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 29 11:37:26.093427 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 29 11:37:26.093438 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 29 11:37:26.093449 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:37:26.093460 kernel: NX (Execute Disable) protection: active
Jan 29 11:37:26.093472 kernel: APIC: Static calls initialized
Jan 29 11:37:26.093483 kernel: SMBIOS 2.7 present.
Jan 29 11:37:26.093495 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 29 11:37:26.093513 kernel: Hypervisor detected: KVM
Jan 29 11:37:26.093526 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:37:26.093539 kernel: kvm-clock: using sched offset of 8176248470 cycles
Jan 29 11:37:26.093659 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:37:26.093673 kernel: tsc: Detected 2499.996 MHz processor
Jan 29 11:37:26.093686 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:37:26.093700 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:37:26.093717 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 29 11:37:26.093731 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:37:26.093744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:37:26.093757 kernel: Using GB pages for direct mapping
Jan 29 11:37:26.093909 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:37:26.093925 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 29 11:37:26.094020 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 29 11:37:26.094045 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 29 11:37:26.094133 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 29 11:37:26.094154 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 29 11:37:26.094168 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 29 11:37:26.094179 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 29 11:37:26.094193 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 29 11:37:26.094208 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 29 11:37:26.094223 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 29 11:37:26.094237 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 29 11:37:26.094252 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 29 11:37:26.094267 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 29 11:37:26.094285 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 29 11:37:26.094305 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 29 11:37:26.094321 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 29 11:37:26.094468 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 29 11:37:26.094487 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 29 11:37:26.094507 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 29 11:37:26.094523 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 29 11:37:26.094538 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 29 11:37:26.094552 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 29 11:37:26.094569 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 11:37:26.094584 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 11:37:26.094600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 29 11:37:26.094616 kernel: NUMA: Initialized distance table, cnt=1
Jan 29 11:37:26.094632 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 29 11:37:26.094651 kernel: Zone ranges:
Jan 29 11:37:26.094667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:37:26.094683 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 29 11:37:26.094698 kernel: Normal empty
Jan 29 11:37:26.094714 kernel: Movable zone start for each node
Jan 29 11:37:26.094729 kernel: Early memory node ranges
Jan 29 11:37:26.094745 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:37:26.094760 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 29 11:37:26.094776 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 29 11:37:26.094794 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:37:26.094810 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:37:26.095009 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 29 11:37:26.095033 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 29 11:37:26.095050 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:37:26.095066 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 29 11:37:26.095082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:37:26.095098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:37:26.095113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:37:26.095129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:37:26.095149 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:37:26.095165 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:37:26.095181 kernel: TSC deadline timer available
Jan 29 11:37:26.095197 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 11:37:26.095212 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:37:26.095228 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 29 11:37:26.095243 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:37:26.095259 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:37:26.095275 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 11:37:26.095294 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:37:26.095310 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 11:37:26.095325 kernel: pcpu-alloc: [0] 0 1
Jan 29 11:37:26.095340 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:37:26.095355 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:37:26.095373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:37:26.095389 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:37:26.095404 kernel: random: crng init done
Jan 29 11:37:26.095423 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:37:26.095438 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:37:26.095453 kernel: Fallback order for Node 0: 0
Jan 29 11:37:26.095469 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 29 11:37:26.095485 kernel: Policy zone: DMA32
Jan 29 11:37:26.095500 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:37:26.095516 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 125152K reserved, 0K cma-reserved)
Jan 29 11:37:26.095532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:37:26.095551 kernel: Kernel/User page tables isolation: enabled
Jan 29 11:37:26.095566 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 11:37:26.095582 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:37:26.095598 kernel: Dynamic Preempt: voluntary
Jan 29 11:37:26.095613 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:37:26.095629 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:37:26.095645 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:37:26.095661 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:37:26.095676 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:37:26.095693 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:37:26.095802 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:37:26.095822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:37:26.095838 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 11:37:26.095853 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:37:26.095870 kernel: Console: colour VGA+ 80x25
Jan 29 11:37:26.095885 kernel: printk: console [ttyS0] enabled
Jan 29 11:37:26.095901 kernel: ACPI: Core revision 20230628
Jan 29 11:37:26.095918 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 29 11:37:26.095934 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:37:26.095969 kernel: x2apic enabled
Jan 29 11:37:26.095980 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:37:26.096003 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 29 11:37:26.096018 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 29 11:37:26.096032 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 11:37:26.096047 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 11:37:26.096060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:37:26.096074 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:37:26.096088 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:37:26.096102 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:37:26.096116 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 11:37:26.096130 kernel: RETBleed: Vulnerable
Jan 29 11:37:26.096147 kernel: Speculative Store Bypass: Vulnerable
Jan 29 11:37:26.096161 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:37:26.096175 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:37:26.096189 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 11:37:26.096203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:37:26.096216 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:37:26.096233 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:37:26.096247 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 29 11:37:26.096260 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 29 11:37:26.096273 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 11:37:26.096287 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 11:37:26.096300 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 11:37:26.096314 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 29 11:37:26.096328 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:37:26.096342 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 29 11:37:26.096356 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 29 11:37:26.096369 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 29 11:37:26.096385 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 29 11:37:26.096399 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 29 11:37:26.096413 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 29 11:37:26.096427 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 29 11:37:26.096441 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:37:26.096455 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:37:26.096469 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:37:26.096482 kernel: landlock: Up and running.
Jan 29 11:37:26.096496 kernel: SELinux: Initializing.
Jan 29 11:37:26.096509 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:37:26.096531 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:37:26.096545 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 11:37:26.096561 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:37:26.096575 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:37:26.096589 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:37:26.096603 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 11:37:26.096617 kernel: signal: max sigframe size: 3632
Jan 29 11:37:26.096631 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:37:26.096646 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:37:26.096660 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 11:37:26.096674 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:37:26.096691 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:37:26.096705 kernel: .... node #0, CPUs: #1
Jan 29 11:37:26.096720 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 29 11:37:26.096734 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 11:37:26.096748 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:37:26.096761 kernel: smpboot: Max logical packages: 1
Jan 29 11:37:26.096775 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 29 11:37:26.096789 kernel: devtmpfs: initialized
Jan 29 11:37:26.096804 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:37:26.096818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:37:26.096830 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:37:26.096843 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:37:26.096856 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:37:26.096870 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:37:26.096884 kernel: audit: type=2000 audit(1738150645.470:1): state=initialized audit_enabled=0 res=1
Jan 29 11:37:26.098640 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:37:26.098672 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:37:26.098695 kernel: cpuidle: using governor menu
Jan 29 11:37:26.098711 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:37:26.098728 kernel: dca service started, version 1.12.1
Jan 29 11:37:26.098746 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:37:26.098763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:37:26.098780 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:37:26.098797 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:37:26.098814 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:37:26.098907 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:37:26.098929 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:37:26.098963 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:37:26.098981 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:37:26.098997 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:37:26.099014 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 29 11:37:26.099030 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:37:26.099047 kernel: ACPI: Interpreter enabled
Jan 29 11:37:26.099064 kernel: ACPI: PM: (supports S0 S5)
Jan 29 11:37:26.099081 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:37:26.099102 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:37:26.099119 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:37:26.099136 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 29 11:37:26.099153 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:37:26.099396 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:37:26.099543 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 11:37:26.099677 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 11:37:26.099967 kernel: acpiphp: Slot [3] registered
Jan 29 11:37:26.100032 kernel: acpiphp: Slot [4] registered
Jan 29 11:37:26.100048 kernel: acpiphp: Slot [5] registered
Jan 29 11:37:26.100111 kernel: acpiphp: Slot [6] registered
Jan 29 11:37:26.100127 kernel: acpiphp: Slot [7] registered
Jan 29 11:37:26.100140 kernel: acpiphp: Slot [8] registered
Jan 29 11:37:26.100154 kernel: acpiphp: Slot [9] registered
Jan 29 11:37:26.100167 kernel: acpiphp: Slot [10] registered
Jan 29 11:37:26.100180 kernel: acpiphp: Slot [11] registered
Jan 29 11:37:26.100193 kernel: acpiphp: Slot [12] registered
Jan 29 11:37:26.100211 kernel: acpiphp: Slot [13] registered
Jan 29 11:37:26.100225 kernel: acpiphp: Slot [14] registered
Jan 29 11:37:26.100237 kernel: acpiphp: Slot [15] registered
Jan 29 11:37:26.100251 kernel: acpiphp: Slot [16] registered
Jan 29 11:37:26.100265 kernel: acpiphp: Slot [17] registered
Jan 29 11:37:26.100279 kernel: acpiphp: Slot [18] registered
Jan 29 11:37:26.100294 kernel: acpiphp: Slot [19] registered
Jan 29 11:37:26.100308 kernel: acpiphp: Slot [20] registered
Jan 29 11:37:26.100321 kernel: acpiphp: Slot [21] registered
Jan 29 11:37:26.100339 kernel: acpiphp: Slot [22] registered
Jan 29 11:37:26.100354 kernel: acpiphp: Slot [23] registered
Jan 29 11:37:26.100417 kernel: acpiphp: Slot [24] registered
Jan 29 11:37:26.100432 kernel: acpiphp: Slot [25] registered
Jan 29 11:37:26.100448 kernel: acpiphp: Slot [26] registered
Jan 29 11:37:26.100462 kernel: acpiphp: Slot [27] registered
Jan 29 11:37:26.100476 kernel: acpiphp: Slot [28] registered
Jan 29 11:37:26.100490 kernel: acpiphp: Slot [29] registered
Jan 29 11:37:26.100503 kernel: acpiphp: Slot [30] registered
Jan 29 11:37:26.100517 kernel: acpiphp: Slot [31] registered
Jan 29 11:37:26.100543 kernel: PCI host bridge to bus 0000:00
Jan 29 11:37:26.100779 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:37:26.101050 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:37:26.101350 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:37:26.101494 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 11:37:26.101840 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:37:26.102217 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 11:37:26.102374 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 11:37:26.102557 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 29 11:37:26.103198 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 29 11:37:26.103348 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 29 11:37:26.103485 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 29 11:37:26.103620 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 29 11:37:26.103761 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 29 11:37:26.103895 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 29 11:37:26.104055 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 29 11:37:26.104190 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 29 11:37:26.104337 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 29 11:37:26.104541 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 29 11:37:26.104762 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 11:37:26.104900 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:37:26.105073 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 29 11:37:26.105209 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 29 11:37:26.105348 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 29 11:37:26.105482 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 29 11:37:26.105676 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:37:26.105700 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:37:26.105723 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:37:26.105739 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:37:26.105998 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 11:37:26.106020 kernel: iommu: Default domain type: Translated
Jan 29 11:37:26.106037 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:37:26.106053 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:37:26.106069 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:37:26.106085 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:37:26.106101 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 29 11:37:26.106391 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 29 11:37:26.106536 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 29 11:37:26.106673 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:37:26.106693 kernel: vgaarb: loaded
Jan 29 11:37:26.106808 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 29 11:37:26.106906 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 29 11:37:26.106926 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:37:26.106954 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:37:26.106977 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:37:26.106993 kernel: pnp: PnP ACPI init
Jan 29 11:37:26.107009 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 11:37:26.107025 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:37:26.107041 kernel: NET: Registered PF_INET protocol family
Jan 29 11:37:26.107057 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:37:26.107073 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 11:37:26.107090 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:37:26.107105 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:37:26.107125 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 11:37:26.107141 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 11:37:26.107157 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:37:26.107172 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:37:26.107189 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:37:26.107204 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:37:26.107349 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:37:26.107472 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:37:26.107595 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:37:26.107714 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 11:37:26.107849 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 11:37:26.107866 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:37:26.107880 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 11:37:26.107894 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 29 11:37:26.107909 kernel: clocksource: Switched to clocksource tsc
Jan 29 11:37:26.107922 kernel: Initialise system trusted keyrings
Jan 29 11:37:26.107935 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 11:37:26.107984 kernel: Key type asymmetric registered
Jan 29 11:37:26.107997 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:37:26.108011 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:37:26.108026 kernel: io scheduler mq-deadline registered
Jan 29 11:37:26.108040 kernel: io scheduler kyber registered
Jan 29 11:37:26.108054 kernel: io scheduler bfq registered
Jan 29 11:37:26.108069 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:37:26.108085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:37:26.108099 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:37:26.108119 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:37:26.108134 kernel: i8042: Warning: Keylock active
Jan 29 11:37:26.108152 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:37:26.108168 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:37:26.108323 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 29 11:37:26.108541 kernel: rtc_cmos 00:00: registered as rtc0
Jan 29 11:37:26.108670 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T11:37:25 UTC (1738150645)
Jan 29 11:37:26.108786 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 29 11:37:26.108809 kernel: intel_pstate: CPU model not supported
Jan 29 11:37:26.110373 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:37:26.110388 kernel: Segment Routing with IPv6
Jan 29 11:37:26.110401 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:37:26.110415 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:37:26.110429 kernel: Key type dns_resolver registered
Jan 29 11:37:26.110488 kernel: IPI shorthand broadcast: enabled
Jan 29 11:37:26.110502 kernel: sched_clock: Marking stable (817002406, 224582828)->(1160651187, -119065953)
Jan 29 11:37:26.110515 kernel: registered taskstats version 1
Jan 29 11:37:26.110534 kernel: Loading compiled-in X.509 certificates
Jan 29 11:37:26.110547 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 11:37:26.110560 kernel: Key type .fscrypt registered
Jan 29 11:37:26.110575 kernel: Key type fscrypt-provisioning registered
Jan 29 11:37:26.110590 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:37:26.110605 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:37:26.110619 kernel: ima: No architecture policies found
Jan 29 11:37:26.110632 kernel: clk: Disabling unused clocks
Jan 29 11:37:26.110649 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 11:37:26.110662 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:37:26.110677 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 11:37:26.110691 kernel: Run /init as init process
Jan 29 11:37:26.110707 kernel: with arguments:
Jan 29 11:37:26.110723 kernel: /init
Jan 29 11:37:26.110738 kernel: with environment:
Jan 29 11:37:26.110753 kernel: HOME=/
Jan 29 11:37:26.110769 kernel: TERM=linux
Jan 29 11:37:26.110784 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:37:26.110810 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:37:26.110913 systemd[1]: Detected virtualization amazon.
Jan 29 11:37:26.110935 systemd[1]: Detected architecture x86-64.
Jan 29 11:37:26.111000 systemd[1]: Running in initrd.
Jan 29 11:37:26.111076 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:37:26.111094 systemd[1]: Hostname set to .
Jan 29 11:37:26.111112 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:37:26.111213 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:37:26.111272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:37:26.111291 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:37:26.111309 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:37:26.111493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:37:26.111519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:37:26.111537 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:37:26.111557 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:37:26.111575 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:37:26.111590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:37:26.111605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:37:26.111620 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:37:26.111641 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:37:26.111656 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:37:26.111671 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:37:26.111686 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:37:26.111701 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:37:26.111717 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:37:26.111733 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:37:26.111749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:37:26.111764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:37:26.111783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:37:26.111799 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:37:26.111816 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:37:26.111832 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:37:26.111853 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:37:26.111873 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:37:26.111890 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:37:26.111912 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:37:26.111929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:37:26.112064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:37:26.112082 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:37:26.112137 systemd-journald[179]: Collecting audit messages is disabled.
Jan 29 11:37:26.112180 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:37:26.112198 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:37:26.112218 systemd-journald[179]: Journal started
Jan 29 11:37:26.112258 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2c9a5928d5b09795cc8defd4c072ad) is 4.8M, max 38.6M, 33.7M free.
Jan 29 11:37:26.118095 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:37:26.118904 systemd-modules-load[180]: Inserted module 'overlay'
Jan 29 11:37:26.124857 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:37:26.147301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:37:26.182989 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:37:26.184352 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:37:26.327720 kernel: Bridge firewalling registered
Jan 29 11:37:26.185283 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 29 11:37:26.341283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:37:26.342734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:37:26.345394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:37:26.360197 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:37:26.371686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:37:26.385289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:37:26.396376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:37:26.410186 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:37:26.412093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:37:26.413779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:37:26.428536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:37:26.434065 dracut-cmdline[210]: dracut-dracut-053
Jan 29 11:37:26.439838 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:37:26.506076 systemd-resolved[217]: Positive Trust Anchors:
Jan 29 11:37:26.508639 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:37:26.510776 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:37:26.527110 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 29 11:37:26.531362 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:37:26.535430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:37:26.577965 kernel: SCSI subsystem initialized
Jan 29 11:37:26.588963 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:37:26.602970 kernel: iscsi: registered transport (tcp)
Jan 29 11:37:26.633214 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:37:26.633425 kernel: QLogic iSCSI HBA Driver
Jan 29 11:37:26.692566 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:37:26.698156 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:37:26.739373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:37:26.739457 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:37:26.739479 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:37:26.789026 kernel: raid6: avx512x4 gen() 17036 MB/s
Jan 29 11:37:26.806311 kernel: raid6: avx512x2 gen() 10268 MB/s
Jan 29 11:37:26.822968 kernel: raid6: avx512x1 gen() 14359 MB/s
Jan 29 11:37:26.841271 kernel: raid6: avx2x4 gen() 9990 MB/s
Jan 29 11:37:26.857968 kernel: raid6: avx2x2 gen() 13510 MB/s
Jan 29 11:37:26.874966 kernel: raid6: avx2x1 gen() 13316 MB/s
Jan 29 11:37:26.875057 kernel: raid6: using algorithm avx512x4 gen() 17036 MB/s
Jan 29 11:37:26.892053 kernel: raid6: .... xor() 4293 MB/s, rmw enabled
Jan 29 11:37:26.892153 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 11:37:26.931972 kernel: xor: automatically using best checksumming function avx
Jan 29 11:37:27.157965 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:37:27.169671 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:37:27.175201 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:37:27.203404 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 29 11:37:27.209503 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:37:27.218968 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:37:27.242553 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Jan 29 11:37:27.275768 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:37:27.282131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:37:27.374314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:37:27.383201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:37:27.422272 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:37:27.426818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:37:27.428323 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:37:27.431370 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:37:27.441771 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:37:27.487195 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:37:27.520977 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:37:27.534853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:37:27.535047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:37:27.551875 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:37:27.536962 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:37:27.538495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:37:27.540479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:37:27.559092 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:37:27.543612 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:37:27.563610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:37:27.587285 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 29 11:37:27.612617 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 29 11:37:27.613321 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 29 11:37:27.613616 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:2c:a4:9a:8c:37
Jan 29 11:37:27.623414 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:37:27.883935 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 29 11:37:27.884204 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 11:37:27.884228 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 29 11:37:27.884389 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:37:27.884411 kernel: GPT:9289727 != 16777215
Jan 29 11:37:27.884430 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:37:27.884447 kernel: GPT:9289727 != 16777215
Jan 29 11:37:27.884463 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:37:27.884480 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:37:27.884498 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Jan 29 11:37:27.884528 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Jan 29 11:37:27.901006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:37:27.931279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:37:27.958419 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 29 11:37:27.980235 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 29 11:37:28.000437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 29 11:37:28.000579 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 29 11:37:28.007123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:37:28.025714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 11:37:28.036841 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:37:28.071871 disk-uuid[630]: Primary Header is updated.
Jan 29 11:37:28.071871 disk-uuid[630]: Secondary Entries is updated.
Jan 29 11:37:28.071871 disk-uuid[630]: Secondary Header is updated.
Jan 29 11:37:28.082172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:37:28.142014 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:37:29.127331 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:37:29.128439 disk-uuid[631]: The operation has completed successfully.
Jan 29 11:37:29.311340 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:37:29.311481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:37:29.356159 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:37:29.382121 sh[889]: Success
Jan 29 11:37:29.400983 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 11:37:29.601081 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:37:29.636212 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:37:29.643740 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:37:29.688230 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44
Jan 29 11:37:29.688301 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:37:29.688597 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:37:29.692411 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:37:29.692815 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:37:29.788002 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 11:37:29.792203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:37:29.796316 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:37:29.804244 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:37:29.816281 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:37:29.871297 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:37:29.871485 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:37:29.871513 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 11:37:29.877969 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 11:37:29.901312 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:37:29.904768 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:37:29.920467 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:37:29.935564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:37:30.091460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:37:30.102388 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:37:30.169052 systemd-networkd[1081]: lo: Link UP
Jan 29 11:37:30.169065 systemd-networkd[1081]: lo: Gained carrier
Jan 29 11:37:30.171614 systemd-networkd[1081]: Enumeration completed
Jan 29 11:37:30.172072 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:37:30.172077 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:37:30.173673 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:37:30.175174 systemd[1]: Reached target network.target - Network.
Jan 29 11:37:30.249992 systemd-networkd[1081]: eth0: Link UP
Jan 29 11:37:30.250003 systemd-networkd[1081]: eth0: Gained carrier
Jan 29 11:37:30.250021 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:37:30.278456 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.22.18/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 11:37:30.321734 ignition[1019]: Ignition 2.20.0
Jan 29 11:37:30.321748 ignition[1019]: Stage: fetch-offline
Jan 29 11:37:30.322013 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:37:30.324366 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:37:30.322026 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:37:30.322346 ignition[1019]: Ignition finished successfully
Jan 29 11:37:30.335634 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:37:30.389876 ignition[1089]: Ignition 2.20.0
Jan 29 11:37:30.389891 ignition[1089]: Stage: fetch
Jan 29 11:37:30.391855 ignition[1089]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:37:30.391907 ignition[1089]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:37:30.392064 ignition[1089]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:37:30.420388 ignition[1089]: PUT result: OK
Jan 29 11:37:30.423007 ignition[1089]: parsed url from cmdline: ""
Jan 29 11:37:30.423120 ignition[1089]: no config URL provided
Jan 29 11:37:30.423128 ignition[1089]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:37:30.423141 ignition[1089]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:37:30.423164 ignition[1089]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:37:30.424364 ignition[1089]: PUT result: OK
Jan 29 11:37:30.425831 ignition[1089]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 29 11:37:30.431047 ignition[1089]: GET result: OK
Jan 29 11:37:30.431117 ignition[1089]: parsing config with SHA512: 227ee092342244041089ca162f0e1679664e78a14f2567e6ea89ecf1889156887bc890ac343651ab7b7f2311c226f747980c9a8efb4da1727fb4fd0781d160b2
Jan 29 11:37:30.435016 unknown[1089]: fetched base config from "system"
Jan 29 11:37:30.435031 unknown[1089]: fetched base config from "system"
Jan 29 11:37:30.435388 ignition[1089]: fetch: fetch complete
Jan 29 11:37:30.435040 unknown[1089]: fetched user config from "aws"
Jan 29 11:37:30.435395 ignition[1089]: fetch: fetch passed
Jan 29 11:37:30.439541 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:37:30.435455 ignition[1089]: Ignition finished successfully
Jan 29 11:37:30.448184 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:37:30.482389 ignition[1095]: Ignition 2.20.0
Jan 29 11:37:30.482403 ignition[1095]: Stage: kargs
Jan 29 11:37:30.482843 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:37:30.482856 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:37:30.482994 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:37:30.485656 ignition[1095]: PUT result: OK
Jan 29 11:37:30.494814 ignition[1095]: kargs: kargs passed
Jan 29 11:37:30.494878 ignition[1095]: Ignition finished successfully
Jan 29 11:37:30.499706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:37:30.507351 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:37:30.543242 ignition[1101]: Ignition 2.20.0
Jan 29 11:37:30.543256 ignition[1101]: Stage: disks
Jan 29 11:37:30.543743 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:37:30.543758 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:37:30.543930 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:37:30.545316 ignition[1101]: PUT result: OK
Jan 29 11:37:30.554132 ignition[1101]: disks: disks passed
Jan 29 11:37:30.554467 ignition[1101]: Ignition finished successfully
Jan 29 11:37:30.556618 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:37:30.557412 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:37:30.561817 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:37:30.563617 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:37:30.566728 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:37:30.568422 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:37:30.584385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:37:30.630317 systemd-fsck[1109]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:37:30.637300 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:37:30.645169 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:37:30.818438 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none.
Jan 29 11:37:30.820270 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:37:30.827845 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:37:30.845220 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:37:30.850147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:37:30.854789 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:37:30.855383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:37:30.855421 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:37:30.873315 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:37:30.879966 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1128)
Jan 29 11:37:30.889038 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:37:30.889105 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:37:30.889127 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 11:37:30.890271 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:37:30.904063 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 11:37:30.909349 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:37:31.200146 initrd-setup-root[1152]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:37:31.207572 initrd-setup-root[1159]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:37:31.214496 initrd-setup-root[1166]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:37:31.220246 initrd-setup-root[1173]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:37:31.503625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:37:31.517131 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:37:31.525594 systemd-networkd[1081]: eth0: Gained IPv6LL
Jan 29 11:37:31.527402 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:37:31.556307 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:37:31.556323 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:37:31.596419 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:37:31.626217 ignition[1240]: INFO : Ignition 2.20.0
Jan 29 11:37:31.626217 ignition[1240]: INFO : Stage: mount
Jan 29 11:37:31.636636 ignition[1240]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:37:31.636636 ignition[1240]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:37:31.636636 ignition[1240]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:37:31.647293 ignition[1240]: INFO : PUT result: OK
Jan 29 11:37:31.647293 ignition[1240]: INFO : mount: mount passed
Jan 29 11:37:31.654673 ignition[1240]: INFO : Ignition finished successfully
Jan 29 11:37:31.654691 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:37:31.671187 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:37:31.833347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:37:31.865025 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1253)
Jan 29 11:37:31.867881 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:37:31.867951 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:37:31.867972 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 11:37:31.873981 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 11:37:31.875964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:37:31.903267 ignition[1270]: INFO : Ignition 2.20.0 Jan 29 11:37:31.903267 ignition[1270]: INFO : Stage: files Jan 29 11:37:31.905691 ignition[1270]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:37:31.905691 ignition[1270]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:37:31.905691 ignition[1270]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:37:31.913729 ignition[1270]: INFO : PUT result: OK Jan 29 11:37:31.921178 ignition[1270]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:37:31.924436 ignition[1270]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:37:31.924436 ignition[1270]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:37:31.964476 ignition[1270]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:37:31.966672 ignition[1270]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:37:31.966672 ignition[1270]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:37:31.965497 unknown[1270]: wrote ssh authorized keys file for user: core Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:37:31.988135 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:37:32.346860 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 11:37:32.888023 ignition[1270]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:37:32.890893 ignition[1270]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:37:32.890893 ignition[1270]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:37:32.890893 ignition[1270]: INFO : files: files passed Jan 29 11:37:32.890893 ignition[1270]: INFO : Ignition finished successfully Jan 29 11:37:32.894530 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 29 11:37:32.910476 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:37:32.918080 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:37:32.924612 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:37:32.924739 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:37:32.952227 initrd-setup-root-after-ignition[1298]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:37:32.952227 initrd-setup-root-after-ignition[1298]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:37:32.969628 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:37:32.974382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:37:32.976731 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:37:32.989169 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:37:33.033101 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:37:33.033234 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:37:33.036108 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:37:33.038701 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:37:33.040058 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:37:33.045381 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:37:33.072380 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:37:33.090079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:37:33.131607 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:37:33.136032 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:37:33.138738 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:37:33.139808 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:37:33.139953 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:37:33.146217 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:37:33.151502 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:37:33.153369 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:37:33.156239 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:37:33.160818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:37:33.161045 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:37:33.164696 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:37:33.166306 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:37:33.170594 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:37:33.176656 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:37:33.176897 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:37:33.177351 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:37:33.186109 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:37:33.186675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:37:33.192357 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:37:33.197237 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:37:33.200241 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:37:33.200423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:37:33.208378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:37:33.210099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:37:33.212871 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:37:33.213064 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:37:33.225659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:37:33.226300 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:37:33.226533 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:37:33.232563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:37:33.241727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:37:33.242380 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:37:33.248621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:37:33.248881 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:37:33.264127 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:37:33.264277 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:37:33.274315 ignition[1322]: INFO : Ignition 2.20.0 Jan 29 11:37:33.277663 ignition[1322]: INFO : Stage: umount Jan 29 11:37:33.277663 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:37:33.277663 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:37:33.277663 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:37:33.284198 ignition[1322]: INFO : PUT result: OK Jan 29 11:37:33.289709 ignition[1322]: INFO : umount: umount passed Jan 29 11:37:33.293714 ignition[1322]: INFO : Ignition finished successfully Jan 29 11:37:33.298525 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:37:33.298878 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:37:33.301660 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:37:33.301727 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:37:33.304527 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:37:33.304775 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:37:33.306565 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:37:33.306632 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:37:33.320051 systemd[1]: Stopped target network.target - Network. Jan 29 11:37:33.323020 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:37:33.323207 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:37:33.331437 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:37:33.333243 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:37:33.339174 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:37:33.339300 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:37:33.347262 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:37:33.348608 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:37:33.348660 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:37:33.350468 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:37:33.350521 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:37:33.352682 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:37:33.352741 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:37:33.355340 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:37:33.355411 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:37:33.357354 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:37:33.358795 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:37:33.370987 systemd-networkd[1081]: eth0: DHCPv6 lease lost Jan 29 11:37:33.376864 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:37:33.383076 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:37:33.383199 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:37:33.388718 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:37:33.388926 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:37:33.394783 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:37:33.394852 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:37:33.406153 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:37:33.408062 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:37:33.408259 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:37:33.414587 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:37:33.414686 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:37:33.417896 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:37:33.418054 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:37:33.422058 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:37:33.422121 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:37:33.424380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:37:33.444006 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:37:33.444190 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:37:33.466277 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:37:33.466379 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:37:33.477146 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 11:37:33.477216 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:37:33.484286 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:37:33.484370 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:37:33.487935 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:37:33.488038 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:37:33.489010 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:37:33.489095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:37:33.504299 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:37:33.506013 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:37:33.507257 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:37:33.508733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:37:33.508789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:37:33.516337 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:37:33.516659 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:37:33.520908 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:37:33.521026 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:37:33.524075 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:37:33.524175 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:37:33.531884 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:37:33.532011 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:37:33.533790 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:37:33.543126 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:37:33.566815 systemd[1]: Switching root. Jan 29 11:37:33.627618 systemd-journald[179]: Journal stopped Jan 29 11:37:35.576693 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 29 11:37:35.576897 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:37:35.576934 kernel: SELinux: policy capability open_perms=1 Jan 29 11:37:35.587764 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:37:35.587805 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:37:35.587834 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:37:35.587861 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:37:35.587883 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:37:35.587911 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:37:35.587933 kernel: audit: type=1403 audit(1738150653.886:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:37:35.588121 systemd[1]: Successfully loaded SELinux policy in 44.142ms. Jan 29 11:37:35.588153 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 67.488ms. 
Jan 29 11:37:35.588177 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:37:35.588200 systemd[1]: Detected virtualization amazon. Jan 29 11:37:35.588223 systemd[1]: Detected architecture x86-64. Jan 29 11:37:35.588244 systemd[1]: Detected first boot. Jan 29 11:37:35.588264 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:37:35.588285 zram_generator::config[1366]: No configuration found. Jan 29 11:37:35.588312 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:37:35.588334 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:37:35.588358 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:37:35.588379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:37:35.588401 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:37:35.588424 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:37:35.588445 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:37:35.588467 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:37:35.588495 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:37:35.588519 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:37:35.588539 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:37:35.588559 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:37:35.588578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:37:35.588598 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:37:35.588623 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:37:35.588644 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:37:35.588663 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:37:35.588685 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:37:35.588707 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:37:35.588727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:37:35.588746 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:37:35.588764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:37:35.588853 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:37:35.588877 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:37:35.588898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:37:35.588919 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:37:35.588961 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:37:35.588983 systemd[1]: Reached target swap.target - Swaps. 
Jan 29 11:37:35.589002 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:37:35.589022 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:37:35.589042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:37:35.589067 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:37:35.589087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:37:35.589108 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:37:35.589129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:37:35.589148 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:37:35.589168 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:37:35.589189 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:35.589211 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:37:35.589232 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:37:35.589256 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:37:35.589275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:37:35.589294 systemd[1]: Reached target machines.target - Containers. Jan 29 11:37:35.589428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:37:35.589449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:37:35.589469 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:37:35.589489 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:37:35.589511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:37:35.589537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:37:35.589559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:37:35.589581 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:37:35.589609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:37:35.589631 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:37:35.589653 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:37:35.589675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:37:35.589697 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:37:35.589722 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:37:35.589744 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:37:35.589765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:37:35.589787 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:37:35.589808 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 29 11:37:35.589830 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:37:35.589852 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:37:35.589874 systemd[1]: Stopped verity-setup.service. Jan 29 11:37:35.589895 kernel: fuse: init (API version 7.39) Jan 29 11:37:35.589917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:35.589959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:37:35.589979 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:37:35.589999 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:37:35.590021 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:37:35.590043 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:37:35.590068 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:37:35.590089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:37:35.590111 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:37:35.590133 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:37:35.590155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:37:35.590176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:37:35.590197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:37:35.590219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:37:35.590244 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:37:35.590265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:37:35.590287 kernel: loop: module loaded Jan 29 11:37:35.590308 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:37:35.590329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:37:35.590396 systemd-journald[1441]: Collecting audit messages is disabled. Jan 29 11:37:35.590437 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:37:35.590459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:37:35.590481 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:37:35.590503 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:37:35.590524 systemd-journald[1441]: Journal started Jan 29 11:37:35.590569 systemd-journald[1441]: Runtime Journal (/run/log/journal/ec2c9a5928d5b09795cc8defd4c072ad) is 4.8M, max 38.6M, 33.7M free. Jan 29 11:37:34.926548 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:37:34.949519 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 11:37:34.950059 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:37:35.634813 kernel: ACPI: bus type drm_connector registered Jan 29 11:37:35.634898 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:37:35.634931 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:37:35.644091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 29 11:37:35.652012 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:37:35.662074 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:37:35.664954 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:37:35.665237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:37:35.667238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:37:35.669419 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:37:35.671295 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:37:35.717446 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:37:35.717506 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:37:35.723516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:37:35.734164 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:37:35.738142 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:37:35.739529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:37:35.743647 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:37:35.747828 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:37:35.749191 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:37:35.753783 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:37:35.762186 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:37:35.767253 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:37:35.770807 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:37:35.774141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:37:35.788584 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:37:35.809115 systemd-journald[1441]: Time spent on flushing to /var/log/journal/ec2c9a5928d5b09795cc8defd4c072ad is 67.536ms for 944 entries. Jan 29 11:37:35.809115 systemd-journald[1441]: System Journal (/var/log/journal/ec2c9a5928d5b09795cc8defd4c072ad) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:37:35.906053 systemd-journald[1441]: Received client request to flush runtime journal. Jan 29 11:37:35.906121 kernel: loop0: detected capacity change from 0 to 205544 Jan 29 11:37:35.805180 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:37:35.877497 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:37:35.880577 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:37:35.896709 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:37:35.919984 udevadm[1502]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jan 29 11:37:35.932978 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:37:35.933551 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:37:35.958264 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:37:35.965231 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:37:35.991308 kernel: loop1: detected capacity change from 0 to 140992 Jan 29 11:37:35.993215 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:37:36.009245 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:37:36.125973 kernel: loop2: detected capacity change from 0 to 62848 Jan 29 11:37:36.136684 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 29 11:37:36.137934 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 29 11:37:36.160263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:37:36.247255 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 11:37:36.374972 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 11:37:36.456981 kernel: loop5: detected capacity change from 0 to 140992 Jan 29 11:37:36.524975 kernel: loop6: detected capacity change from 0 to 62848 Jan 29 11:37:36.558979 kernel: loop7: detected capacity change from 0 to 138184 Jan 29 11:37:36.604084 (sd-merge)[1519]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 11:37:36.604817 (sd-merge)[1519]: Merged extensions into '/usr'. Jan 29 11:37:36.613458 systemd[1]: Reloading requested from client PID 1496 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:37:36.613475 systemd[1]: Reloading... Jan 29 11:37:36.813684 zram_generator::config[1545]: No configuration found. Jan 29 11:37:37.069542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:37:37.099934 ldconfig[1492]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:37:37.178216 systemd[1]: Reloading finished in 563 ms. Jan 29 11:37:37.211671 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:37:37.214837 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:37:37.228335 systemd[1]: Starting ensure-sysext.service... Jan 29 11:37:37.234961 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:37:37.252999 systemd[1]: Reloading requested from client PID 1594 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:37:37.253021 systemd[1]: Reloading... Jan 29 11:37:37.288291 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:37:37.290817 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:37:37.294269 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:37:37.294832 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Jan 29 11:37:37.294922 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. 
Jan 29 11:37:37.300623 systemd-tmpfiles[1595]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:37:37.300789 systemd-tmpfiles[1595]: Skipping /boot Jan 29 11:37:37.316652 systemd-tmpfiles[1595]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:37:37.316668 systemd-tmpfiles[1595]: Skipping /boot Jan 29 11:37:37.404006 zram_generator::config[1621]: No configuration found. Jan 29 11:37:37.581194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:37:37.637581 systemd[1]: Reloading finished in 383 ms. Jan 29 11:37:37.656981 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:37:37.669614 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:37:37.684368 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:37:37.690575 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:37:37.707597 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:37:37.726290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:37:37.742539 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:37:37.756435 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:37:37.784530 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:37:37.790283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:37.790706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:37:37.808218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:37:37.821631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:37:37.838872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:37:37.842801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:37:37.843281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:37.856563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:37.857301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:37:37.857533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:37:37.857658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:37.874398 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:37:37.874906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 11:37:37.882629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:37.883166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:37:37.892073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:37:37.894219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:37:37.894927 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:37:37.896615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:37:37.900575 systemd[1]: Finished ensure-sysext.service. Jan 29 11:37:37.902795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:37:37.903807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:37:37.905958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:37:37.906154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:37:37.914922 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:37:37.919253 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:37:37.926739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:37:37.926866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:37:37.927709 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Jan 29 11:37:37.936218 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:37:37.938137 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:37:37.938357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:37:37.955374 augenrules[1711]: No rules Jan 29 11:37:37.960617 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:37:37.962104 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:37:37.978711 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:37:37.983883 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:37:38.008887 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:37:38.011404 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:37:38.024257 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:37:38.026840 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:37:38.131271 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:37:38.191642 systemd-networkd[1730]: lo: Link UP Jan 29 11:37:38.191661 systemd-networkd[1730]: lo: Gained carrier Jan 29 11:37:38.192603 systemd-networkd[1730]: Enumeration completed Jan 29 11:37:38.192739 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 11:37:38.205161 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:37:38.209966 systemd-resolved[1681]: Positive Trust Anchors: Jan 29 11:37:38.209982 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:37:38.210032 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:37:38.220788 (udev-worker)[1738]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:37:38.226237 systemd-resolved[1681]: Defaulting to hostname 'linux'. Jan 29 11:37:38.234342 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:37:38.235751 systemd[1]: Reached target network.target - Network. Jan 29 11:37:38.236796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:37:38.250964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:37:38.251869 systemd-networkd[1730]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:37:38.251885 systemd-networkd[1730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:37:38.256242 systemd-networkd[1730]: eth0: Link UP Jan 29 11:37:38.256498 systemd-networkd[1730]: eth0: Gained carrier Jan 29 11:37:38.256533 systemd-networkd[1730]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:37:38.267088 systemd-networkd[1730]: eth0: DHCPv4 address 172.31.22.18/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 11:37:38.270970 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 11:37:38.280260 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:37:38.280294 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 29 11:37:38.284654 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 11:37:38.311028 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 29 11:37:38.338963 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:37:38.343474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:37:38.352981 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1734) Jan 29 11:37:38.507572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 11:37:38.610962 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:37:38.613710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:37:38.624240 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:37:38.630013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 29 11:37:38.651239 lvm[1842]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:37:38.670091 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:37:38.693208 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:37:38.694905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:37:38.696348 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:37:38.699900 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:37:38.701754 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:37:38.703442 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:37:38.706189 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:37:38.707684 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:37:38.709097 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:37:38.709123 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:37:38.710423 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:37:38.714336 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:37:38.719655 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:37:38.732689 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:37:38.747418 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:37:38.754366 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:37:38.757392 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:37:38.757485 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:37:38.760202 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:37:38.760237 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:37:38.780196 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:37:38.804853 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:37:38.813107 lvm[1849]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:37:38.820272 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:37:38.832183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:37:38.842894 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:37:38.844637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:37:38.863276 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:37:38.887193 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 11:37:38.918677 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 11:37:38.942259 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 29 11:37:38.948176 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:37:38.957179 jq[1853]: false Jan 29 11:37:38.969172 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:37:38.973848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:37:38.974872 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:37:38.982216 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:37:39.001072 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:37:39.004407 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:37:39.017916 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:37:39.019673 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:37:39.043750 ntpd[1856]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:56 UTC 2025 (1): Starting Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:56 UTC 2025 (1): Starting Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: ---------------------------------------------------- Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: corporation. 
Support and training for ntp-4 are Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: available at https://www.nwtime.org/support Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: ---------------------------------------------------- Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: proto: precision = 0.098 usec (-23) Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: basedate set to 2025-01-17 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: gps base set to 2025-01-19 (week 2350) Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Listen normally on 3 eth0 172.31.22.18:123 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Listen normally on 4 lo [::1]:123 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: bind(21) AF_INET6 fe80::42c:a4ff:fe9a:8c37%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: unable to create socket on eth0 (5) for fe80::42c:a4ff:fe9a:8c37%2#123 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: failed to init interface for address fe80::42c:a4ff:fe9a:8c37%2 Jan 29 11:37:39.077310 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: Listening on routing socket on fd #21 for interface updates Jan 29 11:37:39.043785 ntpd[1856]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:37:39.125152 jq[1867]: true Jan 29 11:37:39.125534 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:37:39.125534 ntpd[1856]: 29 Jan 11:37:39 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:37:39.043796 ntpd[1856]: ---------------------------------------------------- Jan 29 11:37:39.097856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:37:39.043806 ntpd[1856]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:37:39.121247 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:37:39.043815 ntpd[1856]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:37:39.123410 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:37:39.043824 ntpd[1856]: corporation. Support and training for ntp-4 are Jan 29 11:37:39.123650 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:37:39.139915 jq[1883]: true Jan 29 11:37:39.043834 ntpd[1856]: available at https://www.nwtime.org/support Jan 29 11:37:39.128544 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:37:39.043843 ntpd[1856]: ---------------------------------------------------- Jan 29 11:37:39.153749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:37:39.049918 ntpd[1856]: proto: precision = 0.098 usec (-23) Jan 29 11:37:39.153797 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 11:37:39.053534 ntpd[1856]: basedate set to 2025-01-17 Jan 29 11:37:39.155429 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:37:39.053558 ntpd[1856]: gps base set to 2025-01-19 (week 2350) Jan 29 11:37:39.155453 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:37:39.058654 ntpd[1856]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:37:39.058713 ntpd[1856]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:37:39.070365 ntpd[1856]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:37:39.070421 ntpd[1856]: Listen normally on 3 eth0 172.31.22.18:123 Jan 29 11:37:39.070466 ntpd[1856]: Listen normally on 4 lo [::1]:123 Jan 29 11:37:39.070538 ntpd[1856]: bind(21) AF_INET6 fe80::42c:a4ff:fe9a:8c37%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:37:39.070561 ntpd[1856]: unable to create socket on eth0 (5) for fe80::42c:a4ff:fe9a:8c37%2#123 Jan 29 11:37:39.070579 ntpd[1856]: failed to init interface for address fe80::42c:a4ff:fe9a:8c37%2 Jan 29 11:37:39.070616 ntpd[1856]: Listening on routing socket on fd #21 for interface updates Jan 29 11:37:39.114293 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:37:39.114333 ntpd[1856]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:37:39.127581 dbus-daemon[1852]: [system] SELinux support is enabled Jan 29 11:37:39.152265 dbus-daemon[1852]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1730 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 11:37:39.177986 (ntainerd)[1875]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:37:39.179908 extend-filesystems[1854]: Found loop4 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found loop5 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found loop6 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found loop7 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p1 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p2 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p3 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found usr Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p4 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p6 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p7 Jan 29 11:37:39.179908 extend-filesystems[1854]: Found nvme0n1p9 Jan 29 11:37:39.179908 extend-filesystems[1854]: Checking size of /dev/nvme0n1p9 Jan 29 11:37:39.169480 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:37:39.190819 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 11:37:39.216674 update_engine[1863]: I20250129 11:37:39.188498 1863 main.cc:92] Flatcar Update Engine starting Jan 29 11:37:39.216674 update_engine[1863]: I20250129 11:37:39.190784 1863 update_check_scheduler.cc:74] Next update check in 5m2s Jan 29 11:37:39.205213 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:37:39.218121 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 11:37:39.233155 coreos-metadata[1851]: Jan 29 11:37:39.231 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:37:39.233155 coreos-metadata[1851]: Jan 29 11:37:39.232 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 11:37:39.229638 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 11:37:39.238729 coreos-metadata[1851]: Jan 29 11:37:39.233 INFO Fetch successful Jan 29 11:37:39.238729 coreos-metadata[1851]: Jan 29 11:37:39.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 11:37:39.238729 coreos-metadata[1851]: Jan 29 11:37:39.234 INFO Fetch successful Jan 29 11:37:39.238729 coreos-metadata[1851]: Jan 29 11:37:39.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 11:37:39.238729 coreos-metadata[1851]: Jan 29 11:37:39.236 INFO Fetch successful Jan 29 11:37:39.238729 coreos-metadata[1851]: Jan 29 11:37:39.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 11:37:39.245553 coreos-metadata[1851]: Jan 29 11:37:39.240 INFO Fetch successful Jan 29 11:37:39.245553 coreos-metadata[1851]: Jan 29 11:37:39.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 11:37:39.245553 coreos-metadata[1851]: Jan 29 11:37:39.242 INFO Fetch failed with 404: resource not found Jan 29 11:37:39.245553 coreos-metadata[1851]: Jan 29 11:37:39.242 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 11:37:39.245553 coreos-metadata[1851]: Jan 29 11:37:39.243 INFO Fetch successful Jan 29 11:37:39.245553 coreos-metadata[1851]: Jan 29 11:37:39.243 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 11:37:39.246574 coreos-metadata[1851]: Jan 29 11:37:39.246 INFO Fetch successful Jan 29 11:37:39.246574 coreos-metadata[1851]: Jan 29 11:37:39.246 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 11:37:39.257263 coreos-metadata[1851]: Jan 29 11:37:39.254 INFO Fetch successful Jan 29 11:37:39.257263 coreos-metadata[1851]: Jan 29 11:37:39.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 11:37:39.257263 coreos-metadata[1851]: Jan 29 11:37:39.257 INFO Fetch successful Jan 29 11:37:39.257263 coreos-metadata[1851]: Jan 29 11:37:39.257 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 11:37:39.260747 coreos-metadata[1851]: Jan 29 11:37:39.259 INFO Fetch successful Jan 29 11:37:39.265292 systemd-networkd[1730]: eth0: Gained IPv6LL Jan 29 11:37:39.300833 extend-filesystems[1854]: Resized partition /dev/nvme0n1p9 Jan 29 11:37:39.290754 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:37:39.302366 extend-filesystems[1917]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:37:39.302609 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:37:39.315308 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 11:37:39.319964 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 11:37:39.329158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:37:39.340248 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 29 11:37:39.477991 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1734) Jan 29 11:37:39.513774 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 11:37:39.516724 systemd-logind[1861]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:37:39.520692 extend-filesystems[1917]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 11:37:39.520692 extend-filesystems[1917]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:37:39.520692 extend-filesystems[1917]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 11:37:39.526742 extend-filesystems[1854]: Resized filesystem in /dev/nvme0n1p9 Jan 29 11:37:39.527816 bash[1916]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:37:39.529115 systemd-logind[1861]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 11:37:39.529151 systemd-logind[1861]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:37:39.536849 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:37:39.537255 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:37:39.545174 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:37:39.547096 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:37:39.552536 systemd-logind[1861]: New seat seat0. Jan 29 11:37:39.553929 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:37:39.568266 systemd[1]: Starting sshkeys.service... Jan 29 11:37:39.570670 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:37:39.609422 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:37:39.724932 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:37:39.736476 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:37:39.743558 amazon-ssm-agent[1922]: Initializing new seelog logger Jan 29 11:37:39.747407 amazon-ssm-agent[1922]: New Seelog Logger Creation Complete Jan 29 11:37:39.747407 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.747407 amazon-ssm-agent[1922]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.748336 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 processing appconfig overrides Jan 29 11:37:39.748635 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.748635 amazon-ssm-agent[1922]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.749960 amazon-ssm-agent[1922]: 2025-01-29 11:37:39 INFO Proxy environment variables: Jan 29 11:37:39.750128 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 processing appconfig overrides Jan 29 11:37:39.750930 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.751023 amazon-ssm-agent[1922]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.751236 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 processing appconfig overrides Jan 29 11:37:39.774495 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 11:37:39.774495 amazon-ssm-agent[1922]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:37:39.774632 amazon-ssm-agent[1922]: 2025/01/29 11:37:39 processing appconfig overrides Jan 29 11:37:39.865119 amazon-ssm-agent[1922]: 2025-01-29 11:37:39 INFO http_proxy: Jan 29 11:37:39.872883 locksmithd[1905]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:37:39.932825 coreos-metadata[1992]: Jan 29 11:37:39.932 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:37:39.933752 coreos-metadata[1992]: Jan 29 11:37:39.933 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 11:37:39.934412 coreos-metadata[1992]: Jan 29 11:37:39.934 INFO Fetch successful Jan 29 11:37:39.934412 coreos-metadata[1992]: Jan 29 11:37:39.934 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 11:37:39.935435 coreos-metadata[1992]: Jan 29 11:37:39.935 INFO Fetch successful Jan 29 11:37:39.943688 unknown[1992]: wrote ssh authorized keys file for user: core Jan 29 11:37:39.965122 amazon-ssm-agent[1922]: 2025-01-29 11:37:39 INFO no_proxy: Jan 29 11:37:39.993735 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 11:37:39.994455 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 11:37:40.001672 dbus-daemon[1852]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1898 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 11:37:40.005980 update-ssh-keys[2043]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:37:40.007313 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:37:40.020344 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 11:37:40.022621 systemd[1]: Finished sshkeys.service. Jan 29 11:37:40.073203 amazon-ssm-agent[1922]: 2025-01-29 11:37:39 INFO https_proxy: Jan 29 11:37:40.111983 polkitd[2045]: Started polkitd version 121 Jan 29 11:37:40.137844 polkitd[2045]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 11:37:40.138085 polkitd[2045]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 11:37:40.142778 polkitd[2045]: Finished loading, compiling and executing 2 rules Jan 29 11:37:40.144268 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 11:37:40.149752 polkitd[2045]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 11:37:40.151929 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 11:37:40.169069 amazon-ssm-agent[1922]: 2025-01-29 11:37:39 INFO Checking if agent identity type OnPrem can be assumed Jan 29 11:37:40.206144 sshd_keygen[1899]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:37:40.214327 systemd-hostnamed[1898]: Hostname set to (transient) Jan 29 11:37:40.217173 systemd-resolved[1681]: System hostname changed to 'ip-172-31-22-18'. Jan 29 11:37:40.269351 amazon-ssm-agent[1922]: 2025-01-29 11:37:39 INFO Checking if agent identity type EC2 can be assumed Jan 29 11:37:40.309660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:37:40.320717 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:37:40.355313 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 29 11:37:40.355684 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:37:40.366410 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO Agent will take identity from EC2 Jan 29 11:37:40.372199 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:37:40.382801 containerd[1875]: time="2025-01-29T11:37:40.381135897Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:37:40.422005 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:37:40.431447 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:37:40.443715 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:37:40.445804 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:37:40.465647 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:37:40.489482 containerd[1875]: time="2025-01-29T11:37:40.489193552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492146583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492195590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492220820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492661909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492690587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492773834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.492790912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.493030345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.493048852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.493067261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:37:40.493973 containerd[1875]: time="2025-01-29T11:37:40.493082570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.495590 containerd[1875]: time="2025-01-29T11:37:40.493174116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.495590 containerd[1875]: time="2025-01-29T11:37:40.493814122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:37:40.496570 containerd[1875]: time="2025-01-29T11:37:40.496502250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:37:40.496690 containerd[1875]: time="2025-01-29T11:37:40.496671806Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:37:40.497020 containerd[1875]: time="2025-01-29T11:37:40.496998609Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:37:40.497461 containerd[1875]: time="2025-01-29T11:37:40.497438382Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:37:40.509503 containerd[1875]: time="2025-01-29T11:37:40.509442154Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:37:40.510394 containerd[1875]: time="2025-01-29T11:37:40.509760419Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:37:40.510394 containerd[1875]: time="2025-01-29T11:37:40.509880314Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:37:40.510394 containerd[1875]: time="2025-01-29T11:37:40.509910115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:37:40.510394 containerd[1875]: time="2025-01-29T11:37:40.509933062Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:37:40.510394 containerd[1875]: time="2025-01-29T11:37:40.510146848Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.512911778Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513250607Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513280100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513639203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513685183Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513709394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513728846Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513750248Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513771259Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513790741Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513809871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513827606Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513856797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.513989 containerd[1875]: time="2025-01-29T11:37:40.513878529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.515422 containerd[1875]: time="2025-01-29T11:37:40.513897730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.515422 containerd[1875]: time="2025-01-29T11:37:40.513925714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515690103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515726392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515749515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515770862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515792394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515817650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515836559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.515856370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.517177956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.517265610Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:37:40.517340 containerd[1875]: time="2025-01-29T11:37:40.517311646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.517931473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.517970535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518056493Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518171219Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518188690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518207219Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518221478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518241616Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518256896Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:37:40.519575 containerd[1875]: time="2025-01-29T11:37:40.518272040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:37:40.520027 containerd[1875]: time="2025-01-29T11:37:40.518817184Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:37:40.520027 containerd[1875]: time="2025-01-29T11:37:40.518887843Z" level=info msg="Connect containerd service" Jan 29 11:37:40.520027 containerd[1875]: time="2025-01-29T11:37:40.518933012Z" level=info msg="using legacy CRI server" Jan 29 11:37:40.520027 containerd[1875]: time="2025-01-29T11:37:40.518956819Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:37:40.520027 containerd[1875]: time="2025-01-29T11:37:40.519117843Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:37:40.522110 containerd[1875]: time="2025-01-29T11:37:40.521995369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:37:40.522748 
containerd[1875]: time="2025-01-29T11:37:40.522313176Z" level=info msg="Start subscribing containerd event" Jan 29 11:37:40.522748 containerd[1875]: time="2025-01-29T11:37:40.522370004Z" level=info msg="Start recovering state" Jan 29 11:37:40.522748 containerd[1875]: time="2025-01-29T11:37:40.522447169Z" level=info msg="Start event monitor" Jan 29 11:37:40.522748 containerd[1875]: time="2025-01-29T11:37:40.522462329Z" level=info msg="Start snapshots syncer" Jan 29 11:37:40.522748 containerd[1875]: time="2025-01-29T11:37:40.522475298Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:37:40.522748 containerd[1875]: time="2025-01-29T11:37:40.522487266Z" level=info msg="Start streaming server" Jan 29 11:37:40.524123 containerd[1875]: time="2025-01-29T11:37:40.524101353Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:37:40.524271 containerd[1875]: time="2025-01-29T11:37:40.524229471Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:37:40.524490 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:37:40.529098 containerd[1875]: time="2025-01-29T11:37:40.526704625Z" level=info msg="containerd successfully booted in 0.148601s" Jan 29 11:37:40.565022 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:37:40.665101 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:37:40.765618 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 11:37:40.865494 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 11:37:40.966958 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 11:37:41.066951 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 11:37:41.105383 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [Registrar] Starting registrar module Jan 29 11:37:41.105383 amazon-ssm-agent[1922]: 2025-01-29 11:37:40 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 11:37:41.105383 amazon-ssm-agent[1922]: 2025-01-29 11:37:41 INFO [EC2Identity] EC2 registration was successful. Jan 29 11:37:41.105383 amazon-ssm-agent[1922]: 2025-01-29 11:37:41 INFO [CredentialRefresher] credentialRefresher has started Jan 29 11:37:41.105383 amazon-ssm-agent[1922]: 2025-01-29 11:37:41 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 11:37:41.105662 amazon-ssm-agent[1922]: 2025-01-29 11:37:41 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 11:37:41.167720 amazon-ssm-agent[1922]: 2025-01-29 11:37:41 INFO [CredentialRefresher] Next credential rotation will be in 30.34166208976667 minutes Jan 29 11:37:41.507199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:37:41.510466 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:37:41.516098 systemd[1]: Startup finished in 986ms (kernel) + 8.124s (initrd) + 7.669s (userspace) = 16.781s. 
Jan 29 11:37:41.649510 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:37:42.045455 ntpd[1856]: Listen normally on 6 eth0 [fe80::42c:a4ff:fe9a:8c37%2]:123 Jan 29 11:37:42.046057 ntpd[1856]: 29 Jan 11:37:42 ntpd[1856]: Listen normally on 6 eth0 [fe80::42c:a4ff:fe9a:8c37%2]:123 Jan 29 11:37:42.126216 amazon-ssm-agent[1922]: 2025-01-29 11:37:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 11:37:42.226864 amazon-ssm-agent[1922]: 2025-01-29 11:37:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2104) started Jan 29 11:37:42.327847 amazon-ssm-agent[1922]: 2025-01-29 11:37:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 11:37:42.568620 kubelet[2092]: E0129 11:37:42.568526 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:37:42.570986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:37:42.571268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:37:42.571811 systemd[1]: kubelet.service: Consumed 1.018s CPU time. Jan 29 11:37:43.295019 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:37:43.304573 systemd[1]: Started sshd@0-172.31.22.18:22-139.178.68.195:33994.service - OpenSSH per-connection server daemon (139.178.68.195:33994). Jan 29 11:37:43.537494 sshd[2117]: Accepted publickey for core from 139.178.68.195 port 33994 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:43.542833 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:43.572323 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:37:43.582323 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:37:43.585720 systemd-logind[1861]: New session 1 of user core. Jan 29 11:37:43.608850 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:37:43.617815 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:37:43.631276 (systemd)[2121]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:37:43.824420 systemd[2121]: Queued start job for default target default.target. Jan 29 11:37:43.835278 systemd[2121]: Created slice app.slice - User Application Slice. Jan 29 11:37:43.835324 systemd[2121]: Reached target paths.target - Paths. Jan 29 11:37:43.835349 systemd[2121]: Reached target timers.target - Timers. Jan 29 11:37:43.837009 systemd[2121]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:37:43.855442 systemd[2121]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:37:43.855605 systemd[2121]: Reached target sockets.target - Sockets. Jan 29 11:37:43.855627 systemd[2121]: Reached target basic.target - Basic System. Jan 29 11:37:43.855687 systemd[2121]: Reached target default.target - Main User Target. Jan 29 11:37:43.855727 systemd[2121]: Startup finished in 212ms. 
Jan 29 11:37:43.856023 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:37:43.865413 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:37:44.030130 systemd[1]: Started sshd@1-172.31.22.18:22-139.178.68.195:46872.service - OpenSSH per-connection server daemon (139.178.68.195:46872). Jan 29 11:37:44.218117 sshd[2132]: Accepted publickey for core from 139.178.68.195 port 46872 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:44.219620 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:44.228507 systemd-logind[1861]: New session 2 of user core. Jan 29 11:37:44.236179 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:37:44.362181 sshd[2134]: Connection closed by 139.178.68.195 port 46872 Jan 29 11:37:44.364111 sshd-session[2132]: pam_unix(sshd:session): session closed for user core Jan 29 11:37:44.369307 systemd[1]: sshd@1-172.31.22.18:22-139.178.68.195:46872.service: Deactivated successfully. Jan 29 11:37:44.371500 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:37:44.373818 systemd-logind[1861]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:37:44.376442 systemd-logind[1861]: Removed session 2. Jan 29 11:37:44.401359 systemd[1]: Started sshd@2-172.31.22.18:22-139.178.68.195:46874.service - OpenSSH per-connection server daemon (139.178.68.195:46874). Jan 29 11:37:44.570809 sshd[2139]: Accepted publickey for core from 139.178.68.195 port 46874 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:44.576105 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:44.585908 systemd-logind[1861]: New session 3 of user core. Jan 29 11:37:44.592182 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:37:44.713042 sshd[2141]: Connection closed by 139.178.68.195 port 46874 Jan 29 11:37:44.714909 sshd-session[2139]: pam_unix(sshd:session): session closed for user core Jan 29 11:37:44.723534 systemd-logind[1861]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:37:44.725056 systemd[1]: sshd@2-172.31.22.18:22-139.178.68.195:46874.service: Deactivated successfully. Jan 29 11:37:44.729064 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:37:44.731501 systemd-logind[1861]: Removed session 3. Jan 29 11:37:44.753686 systemd[1]: Started sshd@3-172.31.22.18:22-139.178.68.195:46888.service - OpenSSH per-connection server daemon (139.178.68.195:46888). Jan 29 11:37:44.933143 sshd[2146]: Accepted publickey for core from 139.178.68.195 port 46888 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:44.935174 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:44.941091 systemd-logind[1861]: New session 4 of user core. Jan 29 11:37:44.951179 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:37:45.070593 sshd[2148]: Connection closed by 139.178.68.195 port 46888 Jan 29 11:37:45.071605 sshd-session[2146]: pam_unix(sshd:session): session closed for user core Jan 29 11:37:45.077432 systemd[1]: sshd@3-172.31.22.18:22-139.178.68.195:46888.service: Deactivated successfully. Jan 29 11:37:45.079694 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:37:45.081032 systemd-logind[1861]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:37:45.082230 systemd-logind[1861]: Removed session 4. 
Jan 29 11:37:45.116677 systemd[1]: Started sshd@4-172.31.22.18:22-139.178.68.195:46896.service - OpenSSH per-connection server daemon (139.178.68.195:46896). Jan 29 11:37:45.292862 sshd[2153]: Accepted publickey for core from 139.178.68.195 port 46896 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:45.295737 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:45.303485 systemd-logind[1861]: New session 5 of user core. Jan 29 11:37:45.310678 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:37:45.433469 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:37:45.433985 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:37:45.454098 sudo[2156]: pam_unix(sudo:session): session closed for user root Jan 29 11:37:45.476853 sshd[2155]: Connection closed by 139.178.68.195 port 46896 Jan 29 11:37:45.478729 sshd-session[2153]: pam_unix(sshd:session): session closed for user core Jan 29 11:37:45.483595 systemd[1]: sshd@4-172.31.22.18:22-139.178.68.195:46896.service: Deactivated successfully. Jan 29 11:37:45.486174 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:37:45.488696 systemd-logind[1861]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:37:45.490623 systemd-logind[1861]: Removed session 5. Jan 29 11:37:45.517964 systemd[1]: Started sshd@5-172.31.22.18:22-139.178.68.195:46904.service - OpenSSH per-connection server daemon (139.178.68.195:46904). Jan 29 11:37:45.701660 sshd[2161]: Accepted publickey for core from 139.178.68.195 port 46904 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:45.703322 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:45.710278 systemd-logind[1861]: New session 6 of user core. Jan 29 11:37:45.717350 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:37:45.821642 sudo[2165]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:37:45.822069 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:37:45.830321 sudo[2165]: pam_unix(sudo:session): session closed for user root Jan 29 11:37:45.843186 sudo[2164]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:37:45.843835 sudo[2164]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:37:45.880488 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:37:45.926338 augenrules[2187]: No rules Jan 29 11:37:45.928370 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:37:45.928652 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:37:45.929798 sudo[2164]: pam_unix(sudo:session): session closed for user root Jan 29 11:37:45.953649 sshd[2163]: Connection closed by 139.178.68.195 port 46904 Jan 29 11:37:45.954229 sshd-session[2161]: pam_unix(sshd:session): session closed for user core Jan 29 11:37:45.958035 systemd[1]: sshd@5-172.31.22.18:22-139.178.68.195:46904.service: Deactivated successfully. Jan 29 11:37:45.960584 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:37:45.962165 systemd-logind[1861]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:37:45.963731 systemd-logind[1861]: Removed session 6. 
Jan 29 11:37:46.011413 systemd[1]: Started sshd@6-172.31.22.18:22-139.178.68.195:46906.service - OpenSSH per-connection server daemon (139.178.68.195:46906). Jan 29 11:37:47.065301 systemd-resolved[1681]: Clock change detected. Flushing caches. Jan 29 11:37:47.285706 sshd[2195]: Accepted publickey for core from 139.178.68.195 port 46906 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:37:47.289359 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:37:47.303406 systemd-logind[1861]: New session 7 of user core. Jan 29 11:37:47.314665 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:37:47.441161 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:37:47.441578 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:37:48.401869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:37:48.402481 systemd[1]: kubelet.service: Consumed 1.018s CPU time. Jan 29 11:37:48.409746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:37:48.463379 systemd[1]: Reloading requested from client PID 2229 ('systemctl') (unit session-7.scope)... Jan 29 11:37:48.463403 systemd[1]: Reloading... Jan 29 11:37:48.699940 zram_generator::config[2273]: No configuration found. Jan 29 11:37:48.971795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:37:49.137052 systemd[1]: Reloading finished in 672 ms. Jan 29 11:37:49.235989 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:37:49.236199 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:37:49.237786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:37:49.247844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:37:49.570287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:37:49.580378 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:37:49.673861 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:37:49.673861 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:37:49.673861 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:37:49.673861 kubelet[2329]: I0129 11:37:49.673500 2329 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:37:50.295925 kubelet[2329]: I0129 11:37:50.295868 2329 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:37:50.295925 kubelet[2329]: I0129 11:37:50.295918 2329 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:37:50.297113 kubelet[2329]: I0129 11:37:50.297072 2329 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:37:50.359162 kubelet[2329]: I0129 11:37:50.358858 2329 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:37:50.380410 kubelet[2329]: E0129 11:37:50.380367 2329 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:37:50.380410 kubelet[2329]: I0129 11:37:50.380407 2329 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:37:50.385559 kubelet[2329]: I0129 11:37:50.385530 2329 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:37:50.387491 kubelet[2329]: I0129 11:37:50.387453 2329 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:37:50.388258 kubelet[2329]: I0129 11:37:50.387708 2329 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:37:50.388454 kubelet[2329]: I0129 11:37:50.387738 2329 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.22.18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:37:50.388454 kubelet[2329]: I0129 11:37:50.388409 2329 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 29 11:37:50.388454 kubelet[2329]: I0129 11:37:50.388426 2329 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:37:50.388667 kubelet[2329]: I0129 11:37:50.388559 2329 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:37:50.390656 kubelet[2329]: I0129 11:37:50.390624 2329 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:37:50.390656 kubelet[2329]: I0129 11:37:50.390657 2329 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:37:50.390796 kubelet[2329]: I0129 11:37:50.390696 2329 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:37:50.390796 kubelet[2329]: I0129 11:37:50.390716 2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:37:50.391853 kubelet[2329]: E0129 11:37:50.391424 2329 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:50.391853 kubelet[2329]: E0129 11:37:50.391475 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:50.398487 kubelet[2329]: I0129 11:37:50.398448 2329 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:37:50.399008 kubelet[2329]: W0129 11:37:50.398957 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 11:37:50.399138 kubelet[2329]: E0129 11:37:50.399048 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:37:50.402651 kubelet[2329]: I0129 11:37:50.402519 2329 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:37:50.403514 kubelet[2329]: W0129 11:37:50.403486 2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:37:50.404377 kubelet[2329]: I0129 11:37:50.404353 2329 server.go:1269] "Started kubelet" Jan 29 11:37:50.407849 kubelet[2329]: W0129 11:37:50.407038 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.22.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 11:37:50.407849 kubelet[2329]: E0129 11:37:50.407073 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.22.18\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:37:50.407849 kubelet[2329]: I0129 11:37:50.407109 2329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:37:50.410478 kubelet[2329]: I0129 11:37:50.410452 2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:37:50.414048 kubelet[2329]: I0129 11:37:50.413984 2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:37:50.414667 kubelet[2329]: I0129 11:37:50.414643 2329 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:37:50.416056 kubelet[2329]: I0129 11:37:50.416030 2329 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:37:50.418377 kubelet[2329]: I0129 11:37:50.418353 2329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:37:50.421808 kubelet[2329]: I0129 11:37:50.421782 2329 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:37:50.422855 kubelet[2329]: I0129 11:37:50.422686 2329 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:37:50.422855 kubelet[2329]: I0129 11:37:50.422759 2329 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:37:50.424441 kubelet[2329]: E0129 11:37:50.424416 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:50.427469 kubelet[2329]: E0129 11:37:50.427427 2329 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:37:50.429406 kubelet[2329]: I0129 11:37:50.429257 2329 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:37:50.429406 kubelet[2329]: I0129 11:37:50.429274 2329 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:37:50.429700 kubelet[2329]: I0129 11:37:50.429564 2329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:37:50.450528 kubelet[2329]: E0129 11:37:50.450490 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.22.18\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 11:37:50.451443 kubelet[2329]: W0129 11:37:50.451364 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 11:37:50.451443 kubelet[2329]: E0129 11:37:50.451402 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 29 11:37:50.463938 kubelet[2329]: E0129 11:37:50.451082 2329 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.18.181f26d35b8dd83f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.18,UID:172.31.22.18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.22.18,},FirstTimestamp:2025-01-29 11:37:50.404278335 +0000 UTC m=+0.814466031,LastTimestamp:2025-01-29 11:37:50.404278335 +0000 UTC m=+0.814466031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.18,}" Jan 29 11:37:50.476197 kubelet[2329]: I0129 11:37:50.476156 2329 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:37:50.476197 kubelet[2329]: I0129 11:37:50.476178 2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:37:50.476197 kubelet[2329]: I0129 11:37:50.476199 2329 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:37:50.483901 kubelet[2329]: I0129 11:37:50.483087 2329 policy_none.go:49] "None policy: Start" Jan 29 11:37:50.484379 kubelet[2329]: I0129 11:37:50.484249 2329 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:37:50.484379 kubelet[2329]: I0129 11:37:50.484280 2329 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:37:50.493952 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 29 11:37:50.498481 kubelet[2329]: E0129 11:37:50.496645 2329 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.18.181f26d35ceec299 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.18,UID:172.31.22.18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.22.18,},FirstTimestamp:2025-01-29 11:37:50.427407001 +0000 UTC m=+0.837594700,LastTimestamp:2025-01-29 11:37:50.427407001 +0000 UTC m=+0.837594700,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.18,}" Jan 29 11:37:50.508259 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:37:50.515820 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:37:50.523628 kubelet[2329]: I0129 11:37:50.522282 2329 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:37:50.523628 kubelet[2329]: I0129 11:37:50.522543 2329 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:37:50.523628 kubelet[2329]: I0129 11:37:50.522556 2329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:37:50.523628 kubelet[2329]: I0129 11:37:50.523309 2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:37:50.528662 kubelet[2329]: E0129 11:37:50.528635 2329 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.18\" not found" Jan 29 11:37:50.541745 kubelet[2329]: I0129 11:37:50.541709 2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:37:50.544967 kubelet[2329]: I0129 11:37:50.544935 2329 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:37:50.545148 kubelet[2329]: I0129 11:37:50.545137 2329 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:37:50.545300 kubelet[2329]: I0129 11:37:50.545289 2329 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:37:50.545504 kubelet[2329]: E0129 11:37:50.545480 2329 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 11:37:50.625875 kubelet[2329]: I0129 11:37:50.623814 2329 kubelet_node_status.go:72] "Attempting to register node" node="172.31.22.18" Jan 29 11:37:50.639608 kubelet[2329]: I0129 11:37:50.639565 2329 kubelet_node_status.go:75] "Successfully registered node" node="172.31.22.18" Jan 29 11:37:50.639608 kubelet[2329]: E0129 11:37:50.639603 2329 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.22.18\": node \"172.31.22.18\" not found" Jan 29 11:37:50.723971 kubelet[2329]: E0129 11:37:50.723858 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:50.824271 kubelet[2329]: E0129 11:37:50.824227 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:50.924825 kubelet[2329]: E0129 11:37:50.924701 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:51.025360 kubelet[2329]: E0129 11:37:51.025312 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:51.119630 sudo[2198]: pam_unix(sudo:session): session closed for user root Jan 29 11:37:51.126036 kubelet[2329]: E0129 11:37:51.125992 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:51.142928 sshd[2197]: Connection closed by 139.178.68.195 port 46906 Jan 29 11:37:51.143559 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Jan 29 11:37:51.150615 systemd[1]: sshd@6-172.31.22.18:22-139.178.68.195:46906.service: Deactivated successfully. Jan 29 11:37:51.158881 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:37:51.167063 systemd-logind[1861]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:37:51.169015 systemd-logind[1861]: Removed session 7. 
Jan 29 11:37:51.226592 kubelet[2329]: E0129 11:37:51.226555 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.18\" not found" Jan 29 11:37:51.299196 kubelet[2329]: I0129 11:37:51.299140 2329 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:37:51.299432 kubelet[2329]: W0129 11:37:51.299346 2329 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:37:51.299498 kubelet[2329]: W0129 11:37:51.299439 2329 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:37:51.328719 kubelet[2329]: I0129 11:37:51.328681 2329 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:37:51.329067 containerd[1875]: time="2025-01-29T11:37:51.329024150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:37:51.331397 kubelet[2329]: I0129 11:37:51.331353 2329 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:37:51.392401 kubelet[2329]: I0129 11:37:51.392351 2329 apiserver.go:52] "Watching apiserver" Jan 29 11:37:51.392554 kubelet[2329]: E0129 11:37:51.392348 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:51.413410 kubelet[2329]: E0129 11:37:51.412001 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:37:51.420549 systemd[1]: Created slice kubepods-besteffort-pod9e2489ca_5bfb_4cab_bab3_29857d01de17.slice - libcontainer container kubepods-besteffort-pod9e2489ca_5bfb_4cab_bab3_29857d01de17.slice. 
Jan 29 11:37:51.423454 kubelet[2329]: I0129 11:37:51.423417 2329 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:37:51.437994 kubelet[2329]: I0129 11:37:51.436033 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/60248392-cc03-4e9c-8d46-1318728a4ee1-socket-dir\") pod \"csi-node-driver-nwzz8\" (UID: \"60248392-cc03-4e9c-8d46-1318728a4ee1\") " pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:37:51.437994 kubelet[2329]: I0129 11:37:51.436191 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9x5\" (UniqueName: \"kubernetes.io/projected/9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f-kube-api-access-wt9x5\") pod \"kube-proxy-m2h78\" (UID: \"9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f\") " pod="kube-system/kube-proxy-m2h78" Jan 29 11:37:51.437994 kubelet[2329]: I0129 11:37:51.436224 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-xtables-lock\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.437994 kubelet[2329]: I0129 11:37:51.436248 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-var-run-calico\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.437994 kubelet[2329]: I0129 11:37:51.436399 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f-xtables-lock\") pod \"kube-proxy-m2h78\" (UID: \"9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f\") " pod="kube-system/kube-proxy-m2h78" Jan 29 11:37:51.438439 kubelet[2329]: I0129 11:37:51.436431 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2489ca-5bfb-4cab-bab3-29857d01de17-tigera-ca-bundle\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.438439 kubelet[2329]: I0129 11:37:51.436456 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9e2489ca-5bfb-4cab-bab3-29857d01de17-node-certs\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.438439 kubelet[2329]: I0129 11:37:51.436478 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-cni-net-dir\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.438439 kubelet[2329]: I0129 11:37:51.436511 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-flexvol-driver-host\") pod \"calico-node-7p5lq\" (UID: 
\"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.438439 kubelet[2329]: I0129 11:37:51.436536 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/60248392-cc03-4e9c-8d46-1318728a4ee1-varrun\") pod \"csi-node-driver-nwzz8\" (UID: \"60248392-cc03-4e9c-8d46-1318728a4ee1\") " pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:37:51.447141 kubelet[2329]: I0129 11:37:51.436616 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/60248392-cc03-4e9c-8d46-1318728a4ee1-registration-dir\") pod \"csi-node-driver-nwzz8\" (UID: \"60248392-cc03-4e9c-8d46-1318728a4ee1\") " pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:37:51.447141 kubelet[2329]: I0129 11:37:51.436640 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-lib-modules\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.447141 kubelet[2329]: I0129 11:37:51.436665 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-var-lib-calico\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.447141 kubelet[2329]: I0129 11:37:51.436686 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-cni-log-dir\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.447141 kubelet[2329]: I0129 11:37:51.436708 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvgh\" (UniqueName: \"kubernetes.io/projected/9e2489ca-5bfb-4cab-bab3-29857d01de17-kube-api-access-wvvgh\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.446310 systemd[1]: Created slice kubepods-besteffort-pod9b45f559_93b9_41f4_b6b4_6e3b5a4ebf8f.slice - libcontainer container kubepods-besteffort-pod9b45f559_93b9_41f4_b6b4_6e3b5a4ebf8f.slice. 
Jan 29 11:37:51.447466 kubelet[2329]: I0129 11:37:51.436731 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60248392-cc03-4e9c-8d46-1318728a4ee1-kubelet-dir\") pod \"csi-node-driver-nwzz8\" (UID: \"60248392-cc03-4e9c-8d46-1318728a4ee1\") " pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:37:51.447466 kubelet[2329]: I0129 11:37:51.436754 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxncd\" (UniqueName: \"kubernetes.io/projected/60248392-cc03-4e9c-8d46-1318728a4ee1-kube-api-access-bxncd\") pod \"csi-node-driver-nwzz8\" (UID: \"60248392-cc03-4e9c-8d46-1318728a4ee1\") " pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:37:51.447466 kubelet[2329]: I0129 11:37:51.436779 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f-kube-proxy\") pod \"kube-proxy-m2h78\" (UID: \"9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f\") " pod="kube-system/kube-proxy-m2h78" Jan 29 11:37:51.447466 kubelet[2329]: I0129 11:37:51.436811 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f-lib-modules\") pod \"kube-proxy-m2h78\" (UID: \"9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f\") " pod="kube-system/kube-proxy-m2h78" Jan 29 11:37:51.447466 kubelet[2329]: I0129 11:37:51.446251 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-policysync\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.447919 kubelet[2329]: I0129 11:37:51.446337 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9e2489ca-5bfb-4cab-bab3-29857d01de17-cni-bin-dir\") pod \"calico-node-7p5lq\" (UID: \"9e2489ca-5bfb-4cab-bab3-29857d01de17\") " pod="calico-system/calico-node-7p5lq" Jan 29 11:37:51.566617 kubelet[2329]: E0129 11:37:51.565435 2329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:37:51.566617 kubelet[2329]: W0129 11:37:51.565462 2329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:37:51.566617 kubelet[2329]: E0129 11:37:51.565485 2329 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:37:51.578650 kubelet[2329]: E0129 11:37:51.575480 2329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:37:51.578650 kubelet[2329]: W0129 11:37:51.575504 2329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:37:51.578650 kubelet[2329]: E0129 11:37:51.575540 2329 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:37:51.584054 kubelet[2329]: E0129 11:37:51.578932 2329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:37:51.584054 kubelet[2329]: W0129 11:37:51.578948 2329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:37:51.584054 kubelet[2329]: E0129 11:37:51.578973 2329 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:37:51.600546 kubelet[2329]: E0129 11:37:51.600443 2329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:37:51.600546 kubelet[2329]: W0129 11:37:51.600467 2329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:37:51.600546 kubelet[2329]: E0129 11:37:51.600492 2329 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:37:51.603874 kubelet[2329]: E0129 11:37:51.603679 2329 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:37:51.603874 kubelet[2329]: W0129 11:37:51.603699 2329 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:37:51.603874 kubelet[2329]: E0129 11:37:51.603720 2329 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:37:51.734714 containerd[1875]: time="2025-01-29T11:37:51.734670613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7p5lq,Uid:9e2489ca-5bfb-4cab-bab3-29857d01de17,Namespace:calico-system,Attempt:0,}" Jan 29 11:37:51.791252 containerd[1875]: time="2025-01-29T11:37:51.788057087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m2h78,Uid:9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f,Namespace:kube-system,Attempt:0,}" Jan 29 11:37:52.393425 kubelet[2329]: E0129 11:37:52.393292 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:52.423098 containerd[1875]: time="2025-01-29T11:37:52.423001930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:37:52.425135 containerd[1875]: time="2025-01-29T11:37:52.425093238Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:37:52.428977 containerd[1875]: time="2025-01-29T11:37:52.428921374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:37:52.428977 containerd[1875]: time="2025-01-29T11:37:52.428972689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:37:52.430324 containerd[1875]: time="2025-01-29T11:37:52.430245483Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:37:52.442170 containerd[1875]: time="2025-01-29T11:37:52.442118820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:37:52.445436 containerd[1875]: time="2025-01-29T11:37:52.444987681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.811331ms" Jan 29 11:37:52.451462 containerd[1875]: time="2025-01-29T11:37:52.451392702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.461711ms" Jan 29 11:37:52.550442 kubelet[2329]: E0129 11:37:52.548166 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:37:52.581823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781629240.mount: Deactivated successfully. 
Jan 29 11:37:52.745313 containerd[1875]: time="2025-01-29T11:37:52.743523946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:37:52.745313 containerd[1875]: time="2025-01-29T11:37:52.743590613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:37:52.745313 containerd[1875]: time="2025-01-29T11:37:52.743607386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:37:52.745313 containerd[1875]: time="2025-01-29T11:37:52.743698934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:37:52.754214 containerd[1875]: time="2025-01-29T11:37:52.739480893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:37:52.754214 containerd[1875]: time="2025-01-29T11:37:52.753922241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:37:52.754214 containerd[1875]: time="2025-01-29T11:37:52.753965053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:37:52.754214 containerd[1875]: time="2025-01-29T11:37:52.754087155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:37:52.955530 systemd[1]: Started cri-containerd-efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f.scope - libcontainer container efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f. Jan 29 11:37:52.965573 systemd[1]: Started cri-containerd-36faf7f1e62430512affcf010ef8d5d391adb98af7bac34b436daed59ea13f30.scope - libcontainer container 36faf7f1e62430512affcf010ef8d5d391adb98af7bac34b436daed59ea13f30. Jan 29 11:37:53.025133 containerd[1875]: time="2025-01-29T11:37:53.024950325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7p5lq,Uid:9e2489ca-5bfb-4cab-bab3-29857d01de17,Namespace:calico-system,Attempt:0,} returns sandbox id \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\"" Jan 29 11:37:53.038346 containerd[1875]: time="2025-01-29T11:37:53.037985383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:37:53.047248 containerd[1875]: time="2025-01-29T11:37:53.047201325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m2h78,Uid:9b45f559-93b9-41f4-b6b4-6e3b5a4ebf8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"36faf7f1e62430512affcf010ef8d5d391adb98af7bac34b436daed59ea13f30\"" Jan 29 11:37:53.394189 kubelet[2329]: E0129 11:37:53.394039 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:54.280531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641469191.mount: Deactivated successfully. 
Jan 29 11:37:54.394694 kubelet[2329]: E0129 11:37:54.394647 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:54.435059 containerd[1875]: time="2025-01-29T11:37:54.435005143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:37:54.437392 containerd[1875]: time="2025-01-29T11:37:54.437266275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:37:54.439537 containerd[1875]: time="2025-01-29T11:37:54.438244443Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:37:54.442857 containerd[1875]: time="2025-01-29T11:37:54.441539516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:37:54.446126 containerd[1875]: time="2025-01-29T11:37:54.446081245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.408040838s" Jan 29 11:37:54.446258 containerd[1875]: time="2025-01-29T11:37:54.446130920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:37:54.451702 containerd[1875]: time="2025-01-29T11:37:54.451514093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:37:54.454330 containerd[1875]: time="2025-01-29T11:37:54.454273885Z" level=info msg="CreateContainer within sandbox \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:37:54.473862 containerd[1875]: time="2025-01-29T11:37:54.473809131Z" level=info msg="CreateContainer within sandbox \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26\"" Jan 29 11:37:54.474965 containerd[1875]: time="2025-01-29T11:37:54.474933188Z" level=info msg="StartContainer for \"6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26\"" Jan 29 11:37:54.525033 systemd[1]: Started cri-containerd-6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26.scope - libcontainer container 6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26. 
Jan 29 11:37:54.547725 kubelet[2329]: E0129 11:37:54.547110 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:37:54.575599 containerd[1875]: time="2025-01-29T11:37:54.575548894Z" level=info msg="StartContainer for \"6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26\" returns successfully" Jan 29 11:37:54.593997 systemd[1]: cri-containerd-6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26.scope: Deactivated successfully. Jan 29 11:37:54.706099 containerd[1875]: time="2025-01-29T11:37:54.705879721Z" level=info msg="shim disconnected" id=6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26 namespace=k8s.io Jan 29 11:37:54.706759 containerd[1875]: time="2025-01-29T11:37:54.706718926Z" level=warning msg="cleaning up after shim disconnected" id=6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26 namespace=k8s.io Jan 29 11:37:54.706759 containerd[1875]: time="2025-01-29T11:37:54.706749262Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:37:55.214337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8cb8ee5149114d279f322b517d64308b4e6187d5c5a5a85af8a4da21a8be26-rootfs.mount: Deactivated successfully. Jan 29 11:37:55.395408 kubelet[2329]: E0129 11:37:55.395367 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:55.991018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608312464.mount: Deactivated successfully. 
Jan 29 11:37:56.396098 kubelet[2329]: E0129 11:37:56.395985 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:56.547039 kubelet[2329]: E0129 11:37:56.546567 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:37:56.874087 containerd[1875]: time="2025-01-29T11:37:56.874031311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:37:56.875215 containerd[1875]: time="2025-01-29T11:37:56.875164849Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:37:56.878665 containerd[1875]: time="2025-01-29T11:37:56.878616410Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:37:56.883494 containerd[1875]: time="2025-01-29T11:37:56.883246522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:37:56.884822 containerd[1875]: time="2025-01-29T11:37:56.884248116Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.432541945s" Jan 29 11:37:56.884822 containerd[1875]: time="2025-01-29T11:37:56.884291559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:37:56.886101 containerd[1875]: time="2025-01-29T11:37:56.885850078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:37:56.887409 containerd[1875]: time="2025-01-29T11:37:56.887281549Z" level=info msg="CreateContainer within sandbox \"36faf7f1e62430512affcf010ef8d5d391adb98af7bac34b436daed59ea13f30\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:37:56.905141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955900895.mount: Deactivated successfully. Jan 29 11:37:56.907777 containerd[1875]: time="2025-01-29T11:37:56.907736673Z" level=info msg="CreateContainer within sandbox \"36faf7f1e62430512affcf010ef8d5d391adb98af7bac34b436daed59ea13f30\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bb9b5080b52ec68e7e09a7649b8be19774c8e0a335d2c75bc7f7587fb1476301\"" Jan 29 11:37:56.908862 containerd[1875]: time="2025-01-29T11:37:56.908584918Z" level=info msg="StartContainer for \"bb9b5080b52ec68e7e09a7649b8be19774c8e0a335d2c75bc7f7587fb1476301\"" Jan 29 11:37:56.960401 systemd[1]: Started cri-containerd-bb9b5080b52ec68e7e09a7649b8be19774c8e0a335d2c75bc7f7587fb1476301.scope - libcontainer container bb9b5080b52ec68e7e09a7649b8be19774c8e0a335d2c75bc7f7587fb1476301. 
Jan 29 11:37:57.003403 containerd[1875]: time="2025-01-29T11:37:57.003208003Z" level=info msg="StartContainer for \"bb9b5080b52ec68e7e09a7649b8be19774c8e0a335d2c75bc7f7587fb1476301\" returns successfully" Jan 29 11:37:57.397459 kubelet[2329]: E0129 11:37:57.396138 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:58.397854 kubelet[2329]: E0129 11:37:58.397268 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:37:58.550942 kubelet[2329]: E0129 11:37:58.549745 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:37:59.397801 kubelet[2329]: E0129 11:37:59.397743 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:00.398935 kubelet[2329]: E0129 11:38:00.398865 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:00.549853 kubelet[2329]: E0129 11:38:00.547854 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:01.400063 kubelet[2329]: E0129 11:38:01.400010 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:01.699933 containerd[1875]: time="2025-01-29T11:38:01.699796387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:01.701767 containerd[1875]: time="2025-01-29T11:38:01.701393153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:38:01.703488 containerd[1875]: time="2025-01-29T11:38:01.703414597Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:01.706864 containerd[1875]: time="2025-01-29T11:38:01.706727837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:01.708292 containerd[1875]: time="2025-01-29T11:38:01.707744649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.821857439s" Jan 29 11:38:01.708292 containerd[1875]: time="2025-01-29T11:38:01.707787986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:38:01.712557 containerd[1875]: 
time="2025-01-29T11:38:01.712513319Z" level=info msg="CreateContainer within sandbox \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:38:01.741559 containerd[1875]: time="2025-01-29T11:38:01.741511016Z" level=info msg="CreateContainer within sandbox \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782\"" Jan 29 11:38:01.742480 containerd[1875]: time="2025-01-29T11:38:01.742444791Z" level=info msg="StartContainer for \"de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782\"" Jan 29 11:38:01.894332 systemd[1]: run-containerd-runc-k8s.io-de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782-runc.iL0tnr.mount: Deactivated successfully. Jan 29 11:38:01.909191 systemd[1]: Started cri-containerd-de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782.scope - libcontainer container de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782. Jan 29 11:38:02.035406 containerd[1875]: time="2025-01-29T11:38:02.035350232Z" level=info msg="StartContainer for \"de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782\" returns successfully" Jan 29 11:38:02.401096 kubelet[2329]: E0129 11:38:02.400924 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:02.546801 kubelet[2329]: E0129 11:38:02.546397 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:02.664543 kubelet[2329]: I0129 11:38:02.663595 2329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2h78" podStartSLOduration=8.827002004 podStartE2EDuration="12.663569393s" podCreationTimestamp="2025-01-29 11:37:50 +0000 UTC" firstStartedPulling="2025-01-29 11:37:53.049093862 +0000 UTC m=+3.459281554" lastFinishedPulling="2025-01-29 11:37:56.885661253 +0000 UTC m=+7.295848943" observedRunningTime="2025-01-29 11:37:57.621569014 +0000 UTC m=+8.031756713" watchObservedRunningTime="2025-01-29 11:38:02.663569393 +0000 UTC m=+13.073757128" Jan 29 11:38:02.880824 containerd[1875]: time="2025-01-29T11:38:02.880760427Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:38:02.885825 systemd[1]: cri-containerd-de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782.scope: Deactivated successfully. Jan 29 11:38:02.897894 kubelet[2329]: I0129 11:38:02.897280 2329 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:38:02.922090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782-rootfs.mount: Deactivated successfully. 
Jan 29 11:38:03.001273 containerd[1875]: time="2025-01-29T11:38:03.001200754Z" level=info msg="shim disconnected" id=de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782 namespace=k8s.io Jan 29 11:38:03.001629 containerd[1875]: time="2025-01-29T11:38:03.001283837Z" level=warning msg="cleaning up after shim disconnected" id=de2c7656f33af5c5cfb31fa7abcbff28ed92f2f56be0d039a99454d98e5f3782 namespace=k8s.io Jan 29 11:38:03.001629 containerd[1875]: time="2025-01-29T11:38:03.001297940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:38:03.402119 kubelet[2329]: E0129 11:38:03.402057 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:03.643661 containerd[1875]: time="2025-01-29T11:38:03.643606815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:38:04.402784 kubelet[2329]: E0129 11:38:04.402730 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:04.553673 systemd[1]: Created slice kubepods-besteffort-pod60248392_cc03_4e9c_8d46_1318728a4ee1.slice - libcontainer container kubepods-besteffort-pod60248392_cc03_4e9c_8d46_1318728a4ee1.slice. Jan 29 11:38:04.556961 containerd[1875]: time="2025-01-29T11:38:04.556917268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:0,}" Jan 29 11:38:04.646641 containerd[1875]: time="2025-01-29T11:38:04.646564893Z" level=error msg="Failed to destroy network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:04.650632 containerd[1875]: time="2025-01-29T11:38:04.650577109Z" level=error msg="encountered an error cleaning up failed sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:04.650800 containerd[1875]: time="2025-01-29T11:38:04.650720448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:04.651565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795-shm.mount: Deactivated successfully. 
Jan 29 11:38:04.652504 kubelet[2329]: E0129 11:38:04.652329 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:04.653378 kubelet[2329]: E0129 11:38:04.652794 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:04.653378 kubelet[2329]: E0129 11:38:04.652936 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:04.653378 kubelet[2329]: E0129 11:38:04.653010 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:05.260800 systemd[1]: Created slice kubepods-besteffort-pod076cc005_5bb4_41ad_b987_7c6aaa46808b.slice - libcontainer container kubepods-besteffort-pod076cc005_5bb4_41ad_b987_7c6aaa46808b.slice. 
Jan 29 11:38:05.288549 kubelet[2329]: I0129 11:38:05.288500 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9v4v\" (UniqueName: \"kubernetes.io/projected/076cc005-5bb4-41ad-b987-7c6aaa46808b-kube-api-access-p9v4v\") pod \"nginx-deployment-8587fbcb89-mn4rh\" (UID: \"076cc005-5bb4-41ad-b987-7c6aaa46808b\") " pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:05.408936 kubelet[2329]: E0129 11:38:05.407332 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:05.578155 containerd[1875]: time="2025-01-29T11:38:05.577410424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:0,}" Jan 29 11:38:05.673719 kubelet[2329]: I0129 11:38:05.673685 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795" Jan 29 11:38:05.675514 containerd[1875]: time="2025-01-29T11:38:05.675470453Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:05.675927 containerd[1875]: time="2025-01-29T11:38:05.675897176Z" level=info msg="Ensure that sandbox 216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795 in task-service has been cleanup successfully" Jan 29 11:38:05.679859 containerd[1875]: time="2025-01-29T11:38:05.677432399Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:05.679859 containerd[1875]: time="2025-01-29T11:38:05.677470555Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:05.681081 systemd[1]: run-netns-cni\x2d1d5f23f4\x2d6deb\x2d4837\x2dd0fc\x2de5f89d90bd47.mount: Deactivated successfully. 
Jan 29 11:38:05.684864 containerd[1875]: time="2025-01-29T11:38:05.684128451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:1,}" Jan 29 11:38:05.835114 containerd[1875]: time="2025-01-29T11:38:05.834983791Z" level=error msg="Failed to destroy network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.836164 containerd[1875]: time="2025-01-29T11:38:05.836123218Z" level=error msg="encountered an error cleaning up failed sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.837236 containerd[1875]: time="2025-01-29T11:38:05.836933777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.837385 kubelet[2329]: E0129 11:38:05.837265 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.837385 kubelet[2329]: E0129 11:38:05.837337 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:05.837385 kubelet[2329]: E0129 11:38:05.837362 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:05.837529 kubelet[2329]: E0129 11:38:05.837419 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:05.862127 containerd[1875]: time="2025-01-29T11:38:05.862072363Z" level=error msg="Failed to destroy network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.862718 containerd[1875]: time="2025-01-29T11:38:05.862610271Z" level=error msg="encountered an error cleaning up failed sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.862871 containerd[1875]: time="2025-01-29T11:38:05.862781520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.863680 kubelet[2329]: E0129 11:38:05.863074 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:05.863680 kubelet[2329]: E0129 11:38:05.863237 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:05.863680 kubelet[2329]: E0129 11:38:05.863270 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:05.863898 kubelet[2329]: E0129 11:38:05.863332 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:06.408238 kubelet[2329]: E0129 11:38:06.408073 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:06.604139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e-shm.mount: Deactivated successfully. Jan 29 11:38:06.604265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4-shm.mount: Deactivated successfully. Jan 29 11:38:06.678817 kubelet[2329]: I0129 11:38:06.678715 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e" Jan 29 11:38:06.682039 containerd[1875]: time="2025-01-29T11:38:06.682004033Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:06.685247 containerd[1875]: time="2025-01-29T11:38:06.682323780Z" level=info msg="Ensure that sandbox 81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e in task-service has been cleanup successfully" Jan 29 11:38:06.688038 kubelet[2329]: I0129 11:38:06.688001 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4" Jan 29 11:38:06.688434 systemd[1]: run-netns-cni\x2d5ea79033\x2d9dfc\x2d7536\x2d9406\x2d7a832648366b.mount: Deactivated successfully. 
Jan 29 11:38:06.690430 containerd[1875]: time="2025-01-29T11:38:06.690398476Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:06.690430 containerd[1875]: time="2025-01-29T11:38:06.690427419Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:06.691033 containerd[1875]: time="2025-01-29T11:38:06.690694400Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:06.691564 containerd[1875]: time="2025-01-29T11:38:06.691340785Z" level=info msg="Ensure that sandbox ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4 in task-service has been cleanup successfully" Jan 29 11:38:06.694568 containerd[1875]: time="2025-01-29T11:38:06.692581527Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:06.694568 containerd[1875]: time="2025-01-29T11:38:06.693449157Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:06.694568 containerd[1875]: time="2025-01-29T11:38:06.693698597Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:06.694568 containerd[1875]: time="2025-01-29T11:38:06.693969492Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:06.694568 containerd[1875]: time="2025-01-29T11:38:06.693988857Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:06.696110 containerd[1875]: time="2025-01-29T11:38:06.696082220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:1,}" Jan 29 11:38:06.697339 containerd[1875]: time="2025-01-29T11:38:06.697303386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:2,}" Jan 29 11:38:06.701289 systemd[1]: run-netns-cni\x2d50361037\x2dc903\x2d412a\x2dc650\x2d27208a4ea6bf.mount: Deactivated successfully. 
Jan 29 11:38:06.979814 containerd[1875]: time="2025-01-29T11:38:06.979762899Z" level=error msg="Failed to destroy network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.981058 containerd[1875]: time="2025-01-29T11:38:06.981011818Z" level=error msg="encountered an error cleaning up failed sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.981261 containerd[1875]: time="2025-01-29T11:38:06.981234005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.981738 kubelet[2329]: E0129 11:38:06.981699 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.982049 kubelet[2329]: E0129 11:38:06.982024 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:06.982219 kubelet[2329]: E0129 11:38:06.982191 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:06.982565 kubelet[2329]: E0129 11:38:06.982533 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" 
podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:06.993041 containerd[1875]: time="2025-01-29T11:38:06.992976352Z" level=error msg="Failed to destroy network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.993778 containerd[1875]: time="2025-01-29T11:38:06.993736283Z" level=error msg="encountered an error cleaning up failed sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.993977 containerd[1875]: time="2025-01-29T11:38:06.993950406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.994905 kubelet[2329]: E0129 11:38:06.994385 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:06.994905 kubelet[2329]: E0129 11:38:06.994464 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:06.994905 kubelet[2329]: E0129 11:38:06.994494 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:06.995122 kubelet[2329]: E0129 11:38:06.994563 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:07.409279 kubelet[2329]: E0129 11:38:07.409168 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:07.599325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834-shm.mount: Deactivated successfully. Jan 29 11:38:07.599457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6-shm.mount: Deactivated successfully. Jan 29 11:38:07.697022 kubelet[2329]: I0129 11:38:07.696919 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6" Jan 29 11:38:07.703147 containerd[1875]: time="2025-01-29T11:38:07.702014470Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:07.704458 kubelet[2329]: I0129 11:38:07.703989 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834" Jan 29 11:38:07.704797 containerd[1875]: time="2025-01-29T11:38:07.704391187Z" level=info msg="Ensure that sandbox 538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6 in task-service has been cleanup successfully" Jan 29 11:38:07.704979 containerd[1875]: time="2025-01-29T11:38:07.704945096Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:07.705611 containerd[1875]: time="2025-01-29T11:38:07.705516913Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:07.706000 containerd[1875]: time="2025-01-29T11:38:07.705872516Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:07.706278 containerd[1875]: time="2025-01-29T11:38:07.706174951Z" level=info msg="Ensure that sandbox 81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834 in task-service has been cleanup successfully" Jan 29 11:38:07.706498 containerd[1875]: time="2025-01-29T11:38:07.706177534Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:07.707033 containerd[1875]: time="2025-01-29T11:38:07.706631820Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:07.707033 containerd[1875]: time="2025-01-29T11:38:07.706647741Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:07.708754 containerd[1875]: time="2025-01-29T11:38:07.708729614Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:07.709260 containerd[1875]: time="2025-01-29T11:38:07.709218799Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:07.709423 containerd[1875]: time="2025-01-29T11:38:07.709356075Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 
11:38:07.710551 containerd[1875]: time="2025-01-29T11:38:07.710526597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:3,}" Jan 29 11:38:07.712304 containerd[1875]: time="2025-01-29T11:38:07.712125886Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:07.712304 containerd[1875]: time="2025-01-29T11:38:07.712154351Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:07.713027 containerd[1875]: time="2025-01-29T11:38:07.712988008Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:07.713196 containerd[1875]: time="2025-01-29T11:38:07.713097118Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:07.713196 containerd[1875]: time="2025-01-29T11:38:07.713113437Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:07.715820 systemd[1]: run-netns-cni\x2d00ab40b0\x2da203\x2d1c6e\x2dea7e\x2db7c1d5954d69.mount: Deactivated successfully. Jan 29 11:38:07.716127 systemd[1]: run-netns-cni\x2da41b40d5\x2d604b\x2db953\x2de164\x2df228882fdb09.mount: Deactivated successfully. Jan 29 11:38:07.723067 containerd[1875]: time="2025-01-29T11:38:07.719971705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:2,}" Jan 29 11:38:07.942011 containerd[1875]: time="2025-01-29T11:38:07.941958521Z" level=error msg="Failed to destroy network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:07.942759 containerd[1875]: time="2025-01-29T11:38:07.942565447Z" level=error msg="encountered an error cleaning up failed sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:07.942759 containerd[1875]: time="2025-01-29T11:38:07.942645830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:07.943672 kubelet[2329]: E0129 11:38:07.943066 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:38:07.943672 kubelet[2329]: E0129 11:38:07.943233 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:07.943672 kubelet[2329]: E0129 11:38:07.943265 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:07.944108 kubelet[2329]: E0129 11:38:07.943313 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:08.000606 containerd[1875]: time="2025-01-29T11:38:08.000493956Z" level=error msg="Failed to destroy network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.002591 containerd[1875]: time="2025-01-29T11:38:08.002195362Z" level=error msg="encountered an error cleaning up failed sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.002591 containerd[1875]: time="2025-01-29T11:38:08.002448062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.003672 kubelet[2329]: E0129 11:38:08.003247 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.003672 kubelet[2329]: E0129 11:38:08.003326 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:08.003672 kubelet[2329]: E0129 11:38:08.003352 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:08.004023 kubelet[2329]: E0129 11:38:08.003405 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:08.414087 kubelet[2329]: E0129 11:38:08.410364 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:08.601129 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c-shm.mount: Deactivated successfully. Jan 29 11:38:08.601585 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf-shm.mount: Deactivated successfully. 
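The runs of \\\" in the pod_workers err= fields above are not corruption; they are three layers of quoting stacked on top of each other: the sandbox ID is quoted inside the CRI error, that error is quoted inside the CreatePodSandboxError message, and the structured logger quotes the whole err value once more. The short Go sketch below reproduces the shape with values taken from the log; the exact formatting calls inside kubelet and klog differ, this only shows how the escaping accumulates.

package main

import "fmt"

func main() {
	// Innermost: the CRI error as produced on the containerd side.
	cri := fmt.Sprintf("rpc error: code = Unknown desc = failed to setup network for sandbox %q: plugin type=%q failed (add): stat /var/lib/calico/nodename: no such file or directory",
		"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6", "calico")

	// Middle: the CreatePodSandboxError message quotes the pod reference once.
	create := fmt.Sprintf("Failed to create sandbox for pod %q: %s",
		"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)", cri)

	// Outer: the "Error syncing pod" message quotes the middle message whole,
	// and the structured logger then quotes the err value one last time,
	// which is where the \\\" sequences come from.
	sync := fmt.Sprintf("failed to %q for %q with CreatePodSandboxError: %q",
		"CreatePodSandbox", "csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)", create)
	fmt.Printf("err=%q\n", sync)
}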
Jan 29 11:38:08.711984 kubelet[2329]: I0129 11:38:08.711802 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.713690324Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.714203712Z" level=info msg="Ensure that sandbox 737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf in task-service has been cleanup successfully" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.715104766Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.715124616Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.719278865Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.719382002Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:08.719565 containerd[1875]: time="2025-01-29T11:38:08.719397424Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:08.719066 systemd[1]: run-netns-cni\x2d8ca3b749\x2d49b8\x2d495d\x2d070a\x2d8f15638556c3.mount: Deactivated successfully. Jan 29 11:38:08.722392 containerd[1875]: time="2025-01-29T11:38:08.721942557Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:08.722392 containerd[1875]: time="2025-01-29T11:38:08.722036189Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:08.722392 containerd[1875]: time="2025-01-29T11:38:08.722051009Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:08.723449 containerd[1875]: time="2025-01-29T11:38:08.723419803Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:08.723557 containerd[1875]: time="2025-01-29T11:38:08.723520510Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:08.723557 containerd[1875]: time="2025-01-29T11:38:08.723536163Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:08.725864 containerd[1875]: time="2025-01-29T11:38:08.724622845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:4,}" Jan 29 11:38:08.725961 kubelet[2329]: I0129 11:38:08.725229 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c" Jan 29 11:38:08.727479 containerd[1875]: time="2025-01-29T11:38:08.727323157Z" level=info msg="StopPodSandbox for 
\"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:08.728689 containerd[1875]: time="2025-01-29T11:38:08.728643713Z" level=info msg="Ensure that sandbox 927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c in task-service has been cleanup successfully" Jan 29 11:38:08.733750 containerd[1875]: time="2025-01-29T11:38:08.730723932Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:08.733750 containerd[1875]: time="2025-01-29T11:38:08.730825830Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:08.738112 systemd[1]: run-netns-cni\x2dac102803\x2d3446\x2dcb09\x2df920\x2d70fd9ec24868.mount: Deactivated successfully. Jan 29 11:38:08.750711 containerd[1875]: time="2025-01-29T11:38:08.750664480Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:08.751537 containerd[1875]: time="2025-01-29T11:38:08.751380517Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:08.751537 containerd[1875]: time="2025-01-29T11:38:08.751503827Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:08.752590 containerd[1875]: time="2025-01-29T11:38:08.752563718Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:08.755212 containerd[1875]: time="2025-01-29T11:38:08.755128951Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:08.755212 containerd[1875]: time="2025-01-29T11:38:08.755162457Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:08.757290 containerd[1875]: time="2025-01-29T11:38:08.755778080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:3,}" Jan 29 11:38:08.970203 containerd[1875]: time="2025-01-29T11:38:08.970090380Z" level=error msg="Failed to destroy network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.972500 containerd[1875]: time="2025-01-29T11:38:08.970992249Z" level=error msg="encountered an error cleaning up failed sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.972500 containerd[1875]: time="2025-01-29T11:38:08.971133945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.973365 kubelet[2329]: E0129 11:38:08.972911 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:08.973365 kubelet[2329]: E0129 11:38:08.972976 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:08.973365 kubelet[2329]: E0129 11:38:08.973019 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:08.973706 kubelet[2329]: E0129 11:38:08.973249 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:09.005959 containerd[1875]: time="2025-01-29T11:38:09.005556888Z" level=error msg="Failed to destroy network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:09.007209 containerd[1875]: time="2025-01-29T11:38:09.007158791Z" level=error msg="encountered an error cleaning up failed sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:09.009788 containerd[1875]: time="2025-01-29T11:38:09.007251760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:09.011023 kubelet[2329]: E0129 11:38:09.009965 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:09.011023 kubelet[2329]: E0129 11:38:09.010110 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:09.011023 kubelet[2329]: E0129 11:38:09.010141 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:09.011158 kubelet[2329]: E0129 11:38:09.010204 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:09.412522 kubelet[2329]: E0129 11:38:09.411579 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:09.600602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10-shm.mount: Deactivated successfully. 
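The systemd unit names in these cleanup lines are just escaped paths: '/' becomes '-', and bytes outside the plain alphanumeric set (such as the dashes inside a CNI network-namespace name) become \xNN, which is why run/netns/cni-5ea79033-... appears as run-netns-cni\x2d5ea79033\x2d....mount while the sandbox shm mounts contain no \x2d at all. Below is a rough approximation of that escaping; see systemd-escape(1) for the authoritative rules, and note that leading-dot and empty-path edge cases are deliberately ignored here.

package main

import "fmt"

// escapeSystemd is a rough approximation of systemd's unit-name escaping:
// '/' maps to '-', plain alphanumerics (plus '_' and '.') pass through, and
// everything else becomes \xNN.
func escapeSystemd(path string) string {
	out := ""
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// CNI network namespace: dashes in the name get escaped as \x2d.
	fmt.Println(escapeSystemd("run/netns/cni-5ea79033-9dfc-7536-9406-7a832648366b") + ".mount")
	// Sandbox shm mount: only the slashes change, so no \x2d appears.
	fmt.Println(escapeSystemd("run/containerd/io.containerd.grpc.v1.cri/sandboxes/81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e/shm") + ".mount")
}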
Jan 29 11:38:09.738617 kubelet[2329]: I0129 11:38:09.738491 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe" Jan 29 11:38:09.740156 containerd[1875]: time="2025-01-29T11:38:09.739753507Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:09.740156 containerd[1875]: time="2025-01-29T11:38:09.740030370Z" level=info msg="Ensure that sandbox 79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe in task-service has been cleanup successfully" Jan 29 11:38:09.744382 containerd[1875]: time="2025-01-29T11:38:09.742944150Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:09.744382 containerd[1875]: time="2025-01-29T11:38:09.743034115Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:09.743823 systemd[1]: run-netns-cni\x2df95151ff\x2d59d5\x2d17b8\x2deba5\x2d6b3ed971ae27.mount: Deactivated successfully. Jan 29 11:38:09.749423 containerd[1875]: time="2025-01-29T11:38:09.748229378Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:09.749423 containerd[1875]: time="2025-01-29T11:38:09.748405349Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:09.749423 containerd[1875]: time="2025-01-29T11:38:09.748422195Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:09.752080 containerd[1875]: time="2025-01-29T11:38:09.751935395Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:09.752819 containerd[1875]: time="2025-01-29T11:38:09.752772093Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:09.753064 containerd[1875]: time="2025-01-29T11:38:09.752920131Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:09.754403 containerd[1875]: time="2025-01-29T11:38:09.754372772Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:09.754497 containerd[1875]: time="2025-01-29T11:38:09.754480055Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:09.754556 containerd[1875]: time="2025-01-29T11:38:09.754498599Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:09.755443 containerd[1875]: time="2025-01-29T11:38:09.755359846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:4,}" Jan 29 11:38:09.763936 kubelet[2329]: I0129 11:38:09.763571 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10" Jan 29 11:38:09.766005 containerd[1875]: time="2025-01-29T11:38:09.765956556Z" level=info msg="StopPodSandbox for 
\"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:09.766536 containerd[1875]: time="2025-01-29T11:38:09.766185957Z" level=info msg="Ensure that sandbox c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10 in task-service has been cleanup successfully" Jan 29 11:38:09.770724 containerd[1875]: time="2025-01-29T11:38:09.770616867Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:09.770930 containerd[1875]: time="2025-01-29T11:38:09.770905395Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:09.772068 systemd[1]: run-netns-cni\x2d2bcfc2d8\x2d6513\x2df66c\x2d04a1\x2d4f49cb5197f2.mount: Deactivated successfully. Jan 29 11:38:09.777148 containerd[1875]: time="2025-01-29T11:38:09.777022750Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:09.777148 containerd[1875]: time="2025-01-29T11:38:09.777140802Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:09.777529 containerd[1875]: time="2025-01-29T11:38:09.777154601Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:09.777934 containerd[1875]: time="2025-01-29T11:38:09.777772005Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:09.778580 containerd[1875]: time="2025-01-29T11:38:09.778546549Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:09.778580 containerd[1875]: time="2025-01-29T11:38:09.778575106Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:09.780384 containerd[1875]: time="2025-01-29T11:38:09.779176244Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:09.780384 containerd[1875]: time="2025-01-29T11:38:09.779271960Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:09.780384 containerd[1875]: time="2025-01-29T11:38:09.779286904Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:09.780384 containerd[1875]: time="2025-01-29T11:38:09.779560469Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:09.780384 containerd[1875]: time="2025-01-29T11:38:09.779780045Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:09.780384 containerd[1875]: time="2025-01-29T11:38:09.779799673Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:09.781853 containerd[1875]: time="2025-01-29T11:38:09.780991501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:5,}" Jan 29 11:38:10.051602 containerd[1875]: time="2025-01-29T11:38:10.051468358Z" 
level=error msg="Failed to destroy network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.053974 containerd[1875]: time="2025-01-29T11:38:10.053919091Z" level=error msg="encountered an error cleaning up failed sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.055107 containerd[1875]: time="2025-01-29T11:38:10.055062984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.056107 kubelet[2329]: E0129 11:38:10.055527 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.056107 kubelet[2329]: E0129 11:38:10.055601 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:10.056107 kubelet[2329]: E0129 11:38:10.055629 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:10.056342 kubelet[2329]: E0129 11:38:10.055683 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:10.078040 containerd[1875]: 
time="2025-01-29T11:38:10.077985767Z" level=error msg="Failed to destroy network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.078771 containerd[1875]: time="2025-01-29T11:38:10.078729549Z" level=error msg="encountered an error cleaning up failed sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.078897 containerd[1875]: time="2025-01-29T11:38:10.078805321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.079824 kubelet[2329]: E0129 11:38:10.079745 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.079946 kubelet[2329]: E0129 11:38:10.079821 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:10.079946 kubelet[2329]: E0129 11:38:10.079891 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:10.080050 kubelet[2329]: E0129 11:38:10.079960 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" 
podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:10.391947 kubelet[2329]: E0129 11:38:10.391266 2329 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:10.412320 kubelet[2329]: E0129 11:38:10.412283 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:10.600100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc-shm.mount: Deactivated successfully. Jan 29 11:38:10.770594 kubelet[2329]: I0129 11:38:10.770553 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b" Jan 29 11:38:10.771776 containerd[1875]: time="2025-01-29T11:38:10.771737765Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:10.772216 containerd[1875]: time="2025-01-29T11:38:10.772009037Z" level=info msg="Ensure that sandbox 462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b in task-service has been cleanup successfully" Jan 29 11:38:10.774174 containerd[1875]: time="2025-01-29T11:38:10.772658610Z" level=info msg="TearDown network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" successfully" Jan 29 11:38:10.774174 containerd[1875]: time="2025-01-29T11:38:10.772681345Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" returns successfully" Jan 29 11:38:10.775728 containerd[1875]: time="2025-01-29T11:38:10.774885742Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:10.775728 containerd[1875]: time="2025-01-29T11:38:10.774994900Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:10.775728 containerd[1875]: time="2025-01-29T11:38:10.775011813Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:10.776414 systemd[1]: run-netns-cni\x2d3f45ea5f\x2dde69\x2dab0f\x2d2bef\x2dd55615569301.mount: Deactivated successfully. 
Jan 29 11:38:10.778804 containerd[1875]: time="2025-01-29T11:38:10.778770387Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:10.778929 containerd[1875]: time="2025-01-29T11:38:10.778893447Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:10.778929 containerd[1875]: time="2025-01-29T11:38:10.778911842Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:10.780228 containerd[1875]: time="2025-01-29T11:38:10.779410662Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:10.780228 containerd[1875]: time="2025-01-29T11:38:10.779496470Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:10.780228 containerd[1875]: time="2025-01-29T11:38:10.779509701Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:10.780541 containerd[1875]: time="2025-01-29T11:38:10.780515810Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:10.780980 containerd[1875]: time="2025-01-29T11:38:10.780615392Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:10.781059 containerd[1875]: time="2025-01-29T11:38:10.780984662Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:10.781486 containerd[1875]: time="2025-01-29T11:38:10.781459015Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:10.781592 containerd[1875]: time="2025-01-29T11:38:10.781573258Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:10.781653 containerd[1875]: time="2025-01-29T11:38:10.781593211Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:10.783131 kubelet[2329]: I0129 11:38:10.781914 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc" Jan 29 11:38:10.783488 containerd[1875]: time="2025-01-29T11:38:10.783237437Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:10.784303 containerd[1875]: time="2025-01-29T11:38:10.784277420Z" level=info msg="Ensure that sandbox 516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc in task-service has been cleanup successfully" Jan 29 11:38:10.785279 containerd[1875]: time="2025-01-29T11:38:10.785168647Z" level=info msg="TearDown network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" successfully" Jan 29 11:38:10.785279 containerd[1875]: time="2025-01-29T11:38:10.785195807Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" returns successfully" Jan 29 11:38:10.785435 containerd[1875]: time="2025-01-29T11:38:10.785405693Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:6,}" Jan 29 11:38:10.788777 containerd[1875]: time="2025-01-29T11:38:10.788748111Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:10.789339 containerd[1875]: time="2025-01-29T11:38:10.789074418Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:10.789339 containerd[1875]: time="2025-01-29T11:38:10.789096796Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:10.789338 systemd[1]: run-netns-cni\x2d24cae279\x2d534a\x2d02a4\x2d8d4c\x2da3a6b46dce31.mount: Deactivated successfully. Jan 29 11:38:10.790546 containerd[1875]: time="2025-01-29T11:38:10.789941361Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:10.790546 containerd[1875]: time="2025-01-29T11:38:10.790030339Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:10.790546 containerd[1875]: time="2025-01-29T11:38:10.790043151Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:10.791656 containerd[1875]: time="2025-01-29T11:38:10.791168888Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:10.791656 containerd[1875]: time="2025-01-29T11:38:10.791262623Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:10.791656 containerd[1875]: time="2025-01-29T11:38:10.791276978Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:10.792365 containerd[1875]: time="2025-01-29T11:38:10.792338116Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:10.792446 containerd[1875]: time="2025-01-29T11:38:10.792428709Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:10.792495 containerd[1875]: time="2025-01-29T11:38:10.792447695Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:10.793447 containerd[1875]: time="2025-01-29T11:38:10.793286779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:5,}" Jan 29 11:38:10.978059 containerd[1875]: time="2025-01-29T11:38:10.977600843Z" level=error msg="Failed to destroy network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.978542 containerd[1875]: time="2025-01-29T11:38:10.978501178Z" level=error msg="encountered an error cleaning up failed sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.978817 containerd[1875]: time="2025-01-29T11:38:10.978691858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.979632 kubelet[2329]: E0129 11:38:10.979184 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:10.979632 kubelet[2329]: E0129 11:38:10.979251 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:10.979632 kubelet[2329]: E0129 11:38:10.979284 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:10.979808 kubelet[2329]: E0129 11:38:10.979354 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:11.010350 containerd[1875]: time="2025-01-29T11:38:11.010298529Z" level=error msg="Failed to destroy network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:11.010881 containerd[1875]: time="2025-01-29T11:38:11.010844809Z" level=error msg="encountered an error cleaning up failed sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:11.011053 containerd[1875]: time="2025-01-29T11:38:11.011025054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:11.011772 kubelet[2329]: E0129 11:38:11.011383 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:11.011772 kubelet[2329]: E0129 11:38:11.011449 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:11.011772 kubelet[2329]: E0129 11:38:11.011478 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:11.011996 kubelet[2329]: E0129 11:38:11.011530 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:11.268180 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 11:38:11.413245 kubelet[2329]: E0129 11:38:11.413118 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:11.602164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731-shm.mount: Deactivated successfully. 
Jan 29 11:38:11.603063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912-shm.mount: Deactivated successfully. Jan 29 11:38:11.800690 kubelet[2329]: I0129 11:38:11.799547 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912" Jan 29 11:38:11.801276 containerd[1875]: time="2025-01-29T11:38:11.800855941Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" Jan 29 11:38:11.803036 containerd[1875]: time="2025-01-29T11:38:11.801325363Z" level=info msg="Ensure that sandbox bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912 in task-service has been cleanup successfully" Jan 29 11:38:11.803036 containerd[1875]: time="2025-01-29T11:38:11.801732306Z" level=info msg="TearDown network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" successfully" Jan 29 11:38:11.803848 containerd[1875]: time="2025-01-29T11:38:11.803805002Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" returns successfully" Jan 29 11:38:11.805672 containerd[1875]: time="2025-01-29T11:38:11.805525906Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:11.805782 containerd[1875]: time="2025-01-29T11:38:11.805754311Z" level=info msg="TearDown network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" successfully" Jan 29 11:38:11.805857 containerd[1875]: time="2025-01-29T11:38:11.805778468Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" returns successfully" Jan 29 11:38:11.806680 systemd[1]: run-netns-cni\x2dab8a33fb\x2d5429\x2ddd36\x2d814e\x2d5f14c32d7846.mount: Deactivated successfully. 
Jan 29 11:38:11.816700 containerd[1875]: time="2025-01-29T11:38:11.814995270Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:11.816700 containerd[1875]: time="2025-01-29T11:38:11.815544615Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:11.822792 containerd[1875]: time="2025-01-29T11:38:11.815887615Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:11.827120 containerd[1875]: time="2025-01-29T11:38:11.827055308Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:11.828496 containerd[1875]: time="2025-01-29T11:38:11.828465588Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:11.829278 containerd[1875]: time="2025-01-29T11:38:11.828621545Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:11.830292 containerd[1875]: time="2025-01-29T11:38:11.830112312Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:11.830292 containerd[1875]: time="2025-01-29T11:38:11.830207817Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:11.830292 containerd[1875]: time="2025-01-29T11:38:11.830222068Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:11.831713 containerd[1875]: time="2025-01-29T11:38:11.831351049Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:11.831713 containerd[1875]: time="2025-01-29T11:38:11.831458905Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:11.831713 containerd[1875]: time="2025-01-29T11:38:11.831473603Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:11.833482 containerd[1875]: time="2025-01-29T11:38:11.832486747Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:11.833482 containerd[1875]: time="2025-01-29T11:38:11.832637382Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:11.833482 containerd[1875]: time="2025-01-29T11:38:11.832739067Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:11.834099 containerd[1875]: time="2025-01-29T11:38:11.834069383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:7,}" Jan 29 11:38:11.841575 kubelet[2329]: I0129 11:38:11.841517 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731" Jan 29 11:38:11.849218 containerd[1875]: time="2025-01-29T11:38:11.849177018Z" level=info msg="StopPodSandbox for 
\"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" Jan 29 11:38:11.851196 containerd[1875]: time="2025-01-29T11:38:11.851153134Z" level=info msg="Ensure that sandbox 8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731 in task-service has been cleanup successfully" Jan 29 11:38:11.854862 containerd[1875]: time="2025-01-29T11:38:11.851549321Z" level=info msg="TearDown network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" successfully" Jan 29 11:38:11.854862 containerd[1875]: time="2025-01-29T11:38:11.851788102Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" returns successfully" Jan 29 11:38:11.855682 containerd[1875]: time="2025-01-29T11:38:11.855306857Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:11.855682 containerd[1875]: time="2025-01-29T11:38:11.855412594Z" level=info msg="TearDown network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" successfully" Jan 29 11:38:11.855682 containerd[1875]: time="2025-01-29T11:38:11.855427154Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" returns successfully" Jan 29 11:38:11.860654 containerd[1875]: time="2025-01-29T11:38:11.860276238Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:11.860654 containerd[1875]: time="2025-01-29T11:38:11.860592028Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:11.860654 containerd[1875]: time="2025-01-29T11:38:11.860612945Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:11.855738 systemd[1]: run-netns-cni\x2df108810a\x2dc646\x2d890b\x2d843a\x2df4e3bb3177be.mount: Deactivated successfully. 
Jan 29 11:38:11.862875 containerd[1875]: time="2025-01-29T11:38:11.862229545Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:11.863090 containerd[1875]: time="2025-01-29T11:38:11.862654082Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:11.863464 containerd[1875]: time="2025-01-29T11:38:11.863352559Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:11.864506 containerd[1875]: time="2025-01-29T11:38:11.864475429Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:11.864613 containerd[1875]: time="2025-01-29T11:38:11.864586817Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:11.864613 containerd[1875]: time="2025-01-29T11:38:11.864605972Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:11.865264 containerd[1875]: time="2025-01-29T11:38:11.865239901Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:11.865575 containerd[1875]: time="2025-01-29T11:38:11.865331921Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:11.867401 containerd[1875]: time="2025-01-29T11:38:11.865571372Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:11.869277 containerd[1875]: time="2025-01-29T11:38:11.868977643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:6,}" Jan 29 11:38:12.033596 containerd[1875]: time="2025-01-29T11:38:12.033475782Z" level=error msg="Failed to destroy network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.034513 containerd[1875]: time="2025-01-29T11:38:12.034021202Z" level=error msg="encountered an error cleaning up failed sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.034513 containerd[1875]: time="2025-01-29T11:38:12.034098212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.035470 kubelet[2329]: E0129 11:38:12.034908 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.035470 kubelet[2329]: E0129 11:38:12.034980 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:12.035470 kubelet[2329]: E0129 11:38:12.035008 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:12.035804 kubelet[2329]: E0129 11:38:12.035208 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:12.117639 containerd[1875]: time="2025-01-29T11:38:12.117521306Z" level=error msg="Failed to destroy network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.118132 containerd[1875]: time="2025-01-29T11:38:12.117873793Z" level=error msg="encountered an error cleaning up failed sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.118132 containerd[1875]: time="2025-01-29T11:38:12.117954632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.119540 kubelet[2329]: E0129 11:38:12.118476 2329 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:12.119540 kubelet[2329]: E0129 11:38:12.118545 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:12.119540 kubelet[2329]: E0129 11:38:12.118571 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:12.119757 kubelet[2329]: E0129 11:38:12.118628 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:12.413528 kubelet[2329]: E0129 11:38:12.413412 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:12.607059 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1-shm.mount: Deactivated successfully. Jan 29 11:38:12.765887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004790441.mount: Deactivated successfully. 
Jan 29 11:38:12.826368 containerd[1875]: time="2025-01-29T11:38:12.826318453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:12.830124 containerd[1875]: time="2025-01-29T11:38:12.829971288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:38:12.835918 containerd[1875]: time="2025-01-29T11:38:12.835604597Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:12.841444 containerd[1875]: time="2025-01-29T11:38:12.839807742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:12.841444 containerd[1875]: time="2025-01-29T11:38:12.841113183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.197444431s" Jan 29 11:38:12.841444 containerd[1875]: time="2025-01-29T11:38:12.841143684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:38:12.850405 kubelet[2329]: I0129 11:38:12.850374 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b" Jan 29 11:38:12.857909 containerd[1875]: time="2025-01-29T11:38:12.854143115Z" level=info msg="StopPodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\"" Jan 29 11:38:12.857909 containerd[1875]: time="2025-01-29T11:38:12.855168586Z" level=info msg="Ensure that sandbox c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b in task-service has been cleanup successfully" Jan 29 11:38:12.858506 containerd[1875]: time="2025-01-29T11:38:12.858468529Z" level=info msg="TearDown network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" successfully" Jan 29 11:38:12.858690 containerd[1875]: time="2025-01-29T11:38:12.858667060Z" level=info msg="StopPodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" returns successfully" Jan 29 11:38:12.860761 systemd[1]: run-netns-cni\x2d0b124ab8\x2d5415\x2d5480\x2da506\x2dbeaff2517d5e.mount: Deactivated successfully. 
Jan 29 11:38:12.862782 containerd[1875]: time="2025-01-29T11:38:12.861569292Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" Jan 29 11:38:12.862782 containerd[1875]: time="2025-01-29T11:38:12.861679236Z" level=info msg="TearDown network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" successfully" Jan 29 11:38:12.862782 containerd[1875]: time="2025-01-29T11:38:12.861696947Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" returns successfully" Jan 29 11:38:12.864349 containerd[1875]: time="2025-01-29T11:38:12.863473609Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:12.864349 containerd[1875]: time="2025-01-29T11:38:12.864091527Z" level=info msg="TearDown network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" successfully" Jan 29 11:38:12.864349 containerd[1875]: time="2025-01-29T11:38:12.864109877Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" returns successfully" Jan 29 11:38:12.865425 containerd[1875]: time="2025-01-29T11:38:12.865400160Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:12.866009 containerd[1875]: time="2025-01-29T11:38:12.865917013Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:12.866009 containerd[1875]: time="2025-01-29T11:38:12.865942132Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:12.867339 containerd[1875]: time="2025-01-29T11:38:12.867100170Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:12.867339 containerd[1875]: time="2025-01-29T11:38:12.867196406Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:12.867339 containerd[1875]: time="2025-01-29T11:38:12.867212628Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:12.870907 containerd[1875]: time="2025-01-29T11:38:12.868477097Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:12.870907 containerd[1875]: time="2025-01-29T11:38:12.868577138Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:12.870907 containerd[1875]: time="2025-01-29T11:38:12.868591041Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:12.872088 containerd[1875]: time="2025-01-29T11:38:12.871901135Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:12.874789 containerd[1875]: time="2025-01-29T11:38:12.873131105Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:12.874789 containerd[1875]: time="2025-01-29T11:38:12.874501260Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" 
returns successfully" Jan 29 11:38:12.880881 containerd[1875]: time="2025-01-29T11:38:12.878328566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:7,}" Jan 29 11:38:12.880999 kubelet[2329]: I0129 11:38:12.877962 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1" Jan 29 11:38:12.881856 containerd[1875]: time="2025-01-29T11:38:12.881503517Z" level=info msg="StopPodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\"" Jan 29 11:38:12.883991 containerd[1875]: time="2025-01-29T11:38:12.883958685Z" level=info msg="Ensure that sandbox c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1 in task-service has been cleanup successfully" Jan 29 11:38:12.884951 containerd[1875]: time="2025-01-29T11:38:12.884907386Z" level=info msg="TearDown network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" successfully" Jan 29 11:38:12.884951 containerd[1875]: time="2025-01-29T11:38:12.884931283Z" level=info msg="StopPodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" returns successfully" Jan 29 11:38:12.885446 containerd[1875]: time="2025-01-29T11:38:12.885409935Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" Jan 29 11:38:12.885531 containerd[1875]: time="2025-01-29T11:38:12.885506513Z" level=info msg="TearDown network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" successfully" Jan 29 11:38:12.885531 containerd[1875]: time="2025-01-29T11:38:12.885524488Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" returns successfully" Jan 29 11:38:12.885988 containerd[1875]: time="2025-01-29T11:38:12.885963359Z" level=info msg="CreateContainer within sandbox \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:38:12.886517 containerd[1875]: time="2025-01-29T11:38:12.886356429Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:12.886517 containerd[1875]: time="2025-01-29T11:38:12.886444751Z" level=info msg="TearDown network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" successfully" Jan 29 11:38:12.886517 containerd[1875]: time="2025-01-29T11:38:12.886458589Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" returns successfully" Jan 29 11:38:12.887031 containerd[1875]: time="2025-01-29T11:38:12.886825721Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:12.887031 containerd[1875]: time="2025-01-29T11:38:12.886933170Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:12.887031 containerd[1875]: time="2025-01-29T11:38:12.886948194Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:12.888929 containerd[1875]: time="2025-01-29T11:38:12.887235790Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 
11:38:12.888929 containerd[1875]: time="2025-01-29T11:38:12.887519752Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:12.888929 containerd[1875]: time="2025-01-29T11:38:12.887537586Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:12.888929 containerd[1875]: time="2025-01-29T11:38:12.888540323Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:12.888929 containerd[1875]: time="2025-01-29T11:38:12.888628527Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:12.888929 containerd[1875]: time="2025-01-29T11:38:12.888643779Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:12.889596 containerd[1875]: time="2025-01-29T11:38:12.889568671Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:12.889681 containerd[1875]: time="2025-01-29T11:38:12.889662668Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:12.889782 containerd[1875]: time="2025-01-29T11:38:12.889683047Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:12.890110 containerd[1875]: time="2025-01-29T11:38:12.890086270Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:12.892351 containerd[1875]: time="2025-01-29T11:38:12.892313804Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:12.892351 containerd[1875]: time="2025-01-29T11:38:12.892336571Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:12.896156 containerd[1875]: time="2025-01-29T11:38:12.896121115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:8,}" Jan 29 11:38:12.955095 containerd[1875]: time="2025-01-29T11:38:12.954951960Z" level=info msg="CreateContainer within sandbox \"efd77e7e8d167eb93caa4fcef4dc4d7ee05be46bbb87f44411784ff77f7a254f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"17d8e5597d4564820e06adf42ef7beca357072fc8c192ffdbfad12f15f87d908\"" Jan 29 11:38:12.958685 containerd[1875]: time="2025-01-29T11:38:12.958491486Z" level=info msg="StartContainer for \"17d8e5597d4564820e06adf42ef7beca357072fc8c192ffdbfad12f15f87d908\"" Jan 29 11:38:13.085778 containerd[1875]: time="2025-01-29T11:38:13.084125598Z" level=error msg="Failed to destroy network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.085778 containerd[1875]: time="2025-01-29T11:38:13.085207538Z" level=error msg="encountered an error cleaning up failed sandbox 
\"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.085778 containerd[1875]: time="2025-01-29T11:38:13.085285237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.086067 kubelet[2329]: E0129 11:38:13.085578 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.086067 kubelet[2329]: E0129 11:38:13.085643 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:13.086067 kubelet[2329]: E0129 11:38:13.085671 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mn4rh" Jan 29 11:38:13.086342 kubelet[2329]: E0129 11:38:13.085727 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mn4rh_default(076cc005-5bb4-41ad-b987-7c6aaa46808b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mn4rh" podUID="076cc005-5bb4-41ad-b987-7c6aaa46808b" Jan 29 11:38:13.123197 systemd[1]: Started cri-containerd-17d8e5597d4564820e06adf42ef7beca357072fc8c192ffdbfad12f15f87d908.scope - libcontainer container 17d8e5597d4564820e06adf42ef7beca357072fc8c192ffdbfad12f15f87d908. 
Jan 29 11:38:13.146793 containerd[1875]: time="2025-01-29T11:38:13.146496832Z" level=error msg="Failed to destroy network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.148119 containerd[1875]: time="2025-01-29T11:38:13.148075853Z" level=error msg="encountered an error cleaning up failed sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.148245 containerd[1875]: time="2025-01-29T11:38:13.148159346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.148458 kubelet[2329]: E0129 11:38:13.148417 2329 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:38:13.148598 kubelet[2329]: E0129 11:38:13.148484 2329 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:13.148598 kubelet[2329]: E0129 11:38:13.148511 2329 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwzz8" Jan 29 11:38:13.148598 kubelet[2329]: E0129 11:38:13.148567 2329 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwzz8_calico-system(60248392-cc03-4e9c-8d46-1318728a4ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwzz8" 
podUID="60248392-cc03-4e9c-8d46-1318728a4ee1" Jan 29 11:38:13.188340 containerd[1875]: time="2025-01-29T11:38:13.188294674Z" level=info msg="StartContainer for \"17d8e5597d4564820e06adf42ef7beca357072fc8c192ffdbfad12f15f87d908\" returns successfully" Jan 29 11:38:13.282851 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:38:13.282965 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:38:13.414909 kubelet[2329]: E0129 11:38:13.414711 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:13.605585 systemd[1]: run-netns-cni\x2d9ea77e3a\x2dd3cf\x2d8e40\x2d3f02\x2d1377d9207b98.mount: Deactivated successfully. Jan 29 11:38:13.892973 kubelet[2329]: I0129 11:38:13.892930 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd" Jan 29 11:38:13.895358 containerd[1875]: time="2025-01-29T11:38:13.895314050Z" level=info msg="StopPodSandbox for \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\"" Jan 29 11:38:13.895813 containerd[1875]: time="2025-01-29T11:38:13.895614440Z" level=info msg="Ensure that sandbox 8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd in task-service has been cleanup successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.896071012Z" level=info msg="TearDown network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\" successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.896094260Z" level=info msg="StopPodSandbox for \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\" returns successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.896664202Z" level=info msg="StopPodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\"" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.896757881Z" level=info msg="TearDown network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.896775805Z" level=info msg="StopPodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" returns successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.897842771Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.898685584Z" level=info msg="TearDown network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.898704768Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" returns successfully" Jan 29 11:38:13.900401 containerd[1875]: time="2025-01-29T11:38:13.899788935Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:13.907486 containerd[1875]: time="2025-01-29T11:38:13.900451378Z" level=info msg="TearDown network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" successfully" Jan 29 11:38:13.907486 containerd[1875]: time="2025-01-29T11:38:13.900470426Z" level=info msg="StopPodSandbox for 
\"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" returns successfully" Jan 29 11:38:13.909936 containerd[1875]: time="2025-01-29T11:38:13.909397728Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:13.909936 containerd[1875]: time="2025-01-29T11:38:13.909761248Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:13.909936 containerd[1875]: time="2025-01-29T11:38:13.909785548Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:13.918636 containerd[1875]: time="2025-01-29T11:38:13.918489560Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:13.919317 containerd[1875]: time="2025-01-29T11:38:13.919214680Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:13.919317 containerd[1875]: time="2025-01-29T11:38:13.919237813Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:13.919368 systemd[1]: run-netns-cni\x2d26fc90c0\x2d8f6d\x2d0210\x2db008\x2dcd712ebb645a.mount: Deactivated successfully. Jan 29 11:38:13.921948 containerd[1875]: time="2025-01-29T11:38:13.921917648Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:13.922047 containerd[1875]: time="2025-01-29T11:38:13.922024195Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:13.922047 containerd[1875]: time="2025-01-29T11:38:13.922042781Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:13.926473 containerd[1875]: time="2025-01-29T11:38:13.926437919Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:13.926813 containerd[1875]: time="2025-01-29T11:38:13.926789211Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:13.927330 containerd[1875]: time="2025-01-29T11:38:13.926919136Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:13.927608 containerd[1875]: time="2025-01-29T11:38:13.927583907Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:13.927696 containerd[1875]: time="2025-01-29T11:38:13.927673777Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:13.927753 containerd[1875]: time="2025-01-29T11:38:13.927693352Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:13.928036 kubelet[2329]: I0129 11:38:13.928007 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815" Jan 29 11:38:13.929311 containerd[1875]: time="2025-01-29T11:38:13.928946775Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:9,}" Jan 29 11:38:13.929311 containerd[1875]: time="2025-01-29T11:38:13.929062725Z" level=info msg="StopPodSandbox for \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\"" Jan 29 11:38:13.929311 containerd[1875]: time="2025-01-29T11:38:13.929257842Z" level=info msg="Ensure that sandbox fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815 in task-service has been cleanup successfully" Jan 29 11:38:13.933929 containerd[1875]: time="2025-01-29T11:38:13.931935507Z" level=info msg="TearDown network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\" successfully" Jan 29 11:38:13.933929 containerd[1875]: time="2025-01-29T11:38:13.931980206Z" level=info msg="StopPodSandbox for \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\" returns successfully" Jan 29 11:38:13.933929 containerd[1875]: time="2025-01-29T11:38:13.933247836Z" level=info msg="StopPodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\"" Jan 29 11:38:13.933929 containerd[1875]: time="2025-01-29T11:38:13.933377260Z" level=info msg="TearDown network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" successfully" Jan 29 11:38:13.933929 containerd[1875]: time="2025-01-29T11:38:13.933391459Z" level=info msg="StopPodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" returns successfully" Jan 29 11:38:13.934759 systemd[1]: run-netns-cni\x2d49fbaf96\x2d3ad8\x2dd949\x2d04ea\x2dda5a8c7d1270.mount: Deactivated successfully. Jan 29 11:38:13.936500 containerd[1875]: time="2025-01-29T11:38:13.936346986Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" Jan 29 11:38:13.939840 containerd[1875]: time="2025-01-29T11:38:13.938206374Z" level=info msg="TearDown network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" successfully" Jan 29 11:38:13.939840 containerd[1875]: time="2025-01-29T11:38:13.939576575Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" returns successfully" Jan 29 11:38:13.941909 containerd[1875]: time="2025-01-29T11:38:13.941804338Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:13.943772 containerd[1875]: time="2025-01-29T11:38:13.943702865Z" level=info msg="TearDown network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" successfully" Jan 29 11:38:13.943873 containerd[1875]: time="2025-01-29T11:38:13.943771009Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" returns successfully" Jan 29 11:38:13.944673 containerd[1875]: time="2025-01-29T11:38:13.944322101Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:13.944673 containerd[1875]: time="2025-01-29T11:38:13.944540271Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:13.944673 containerd[1875]: time="2025-01-29T11:38:13.944557360Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:13.945213 containerd[1875]: time="2025-01-29T11:38:13.944952988Z" 
level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:13.945213 containerd[1875]: time="2025-01-29T11:38:13.945134218Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:13.945213 containerd[1875]: time="2025-01-29T11:38:13.945151787Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:13.945969 containerd[1875]: time="2025-01-29T11:38:13.945947046Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:13.947159 containerd[1875]: time="2025-01-29T11:38:13.946460595Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:13.947159 containerd[1875]: time="2025-01-29T11:38:13.946664779Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:13.947301 containerd[1875]: time="2025-01-29T11:38:13.947250644Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:13.947500 containerd[1875]: time="2025-01-29T11:38:13.947403881Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:13.947730 containerd[1875]: time="2025-01-29T11:38:13.947494882Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:13.949138 containerd[1875]: time="2025-01-29T11:38:13.949104522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:8,}" Jan 29 11:38:14.348746 (udev-worker)[3357]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 11:38:14.353013 systemd-networkd[1730]: cali4e95ef8c474: Link UP Jan 29 11:38:14.353566 systemd-networkd[1730]: cali4e95ef8c474: Gained carrier Jan 29 11:38:14.368761 kubelet[2329]: I0129 11:38:14.368696 2329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7p5lq" podStartSLOduration=4.563898215 podStartE2EDuration="24.3686743s" podCreationTimestamp="2025-01-29 11:37:50 +0000 UTC" firstStartedPulling="2025-01-29 11:37:53.037323546 +0000 UTC m=+3.447511235" lastFinishedPulling="2025-01-29 11:38:12.842099642 +0000 UTC m=+23.252287320" observedRunningTime="2025-01-29 11:38:13.987415633 +0000 UTC m=+24.397603333" watchObservedRunningTime="2025-01-29 11:38:14.3686743 +0000 UTC m=+24.778861993" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.030 [INFO][3382] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.140 [INFO][3382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.18-k8s-csi--node--driver--nwzz8-eth0 csi-node-driver- calico-system 60248392-cc03-4e9c-8d46-1318728a4ee1 958 0 2025-01-29 11:37:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.22.18 csi-node-driver-nwzz8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e95ef8c474 [] []}} ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.140 [INFO][3382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.228 [INFO][3406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" HandleID="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Workload="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.251 [INFO][3406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" HandleID="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Workload="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000dcde0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.22.18", "pod":"csi-node-driver-nwzz8", "timestamp":"2025-01-29 11:38:14.228291312 +0000 UTC"}, Hostname:"172.31.22.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.251 [INFO][3406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.251 [INFO][3406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.251 [INFO][3406] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.18' Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.263 [INFO][3406] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.283 [INFO][3406] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.297 [INFO][3406] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.304 [INFO][3406] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.310 [INFO][3406] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.310 [INFO][3406] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.315 [INFO][3406] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.321 [INFO][3406] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.331 [INFO][3406] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.129/26] block=192.168.87.128/26 handle="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.331 [INFO][3406] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.129/26] handle="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" host="172.31.22.18" Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.331 [INFO][3406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:38:14.372074 containerd[1875]: 2025-01-29 11:38:14.331 [INFO][3406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.129/26] IPv6=[] ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" HandleID="k8s-pod-network.68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Workload="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.373293 containerd[1875]: 2025-01-29 11:38:14.335 [INFO][3382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-csi--node--driver--nwzz8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60248392-cc03-4e9c-8d46-1318728a4ee1", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 37, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"", Pod:"csi-node-driver-nwzz8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e95ef8c474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:14.373293 containerd[1875]: 2025-01-29 11:38:14.336 [INFO][3382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.129/32] ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.373293 containerd[1875]: 2025-01-29 11:38:14.336 [INFO][3382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e95ef8c474 ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.373293 containerd[1875]: 2025-01-29 11:38:14.352 [INFO][3382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.373293 containerd[1875]: 2025-01-29 11:38:14.352 [INFO][3382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" 
WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-csi--node--driver--nwzz8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60248392-cc03-4e9c-8d46-1318728a4ee1", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 37, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c", Pod:"csi-node-driver-nwzz8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e95ef8c474", MAC:"ee:6b:ac:57:b8:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:14.373293 containerd[1875]: 2025-01-29 11:38:14.369 [INFO][3382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c" Namespace="calico-system" Pod="csi-node-driver-nwzz8" WorkloadEndpoint="172.31.22.18-k8s-csi--node--driver--nwzz8-eth0" Jan 29 11:38:14.401694 containerd[1875]: time="2025-01-29T11:38:14.401512068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:38:14.401694 containerd[1875]: time="2025-01-29T11:38:14.401580201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:38:14.401694 containerd[1875]: time="2025-01-29T11:38:14.401598419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:14.402209 containerd[1875]: time="2025-01-29T11:38:14.401772357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:14.414947 kubelet[2329]: E0129 11:38:14.414894 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:14.425064 systemd[1]: Started cri-containerd-68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c.scope - libcontainer container 68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c. 
Jan 29 11:38:14.445415 systemd-networkd[1730]: cali187c0688724: Link UP Jan 29 11:38:14.446151 systemd-networkd[1730]: cali187c0688724: Gained carrier Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.063 [INFO][3392] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.141 [INFO][3392] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0 nginx-deployment-8587fbcb89- default 076cc005-5bb4-41ad-b987-7c6aaa46808b 1063 0 2025-01-29 11:38:05 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.18 nginx-deployment-8587fbcb89-mn4rh eth0 default [] [] [kns.default ksa.default.default] cali187c0688724 [] []}} ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.141 [INFO][3392] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.228 [INFO][3405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" HandleID="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Workload="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.265 [INFO][3405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" HandleID="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Workload="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051440), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.18", "pod":"nginx-deployment-8587fbcb89-mn4rh", "timestamp":"2025-01-29 11:38:14.228289095 +0000 UTC"}, Hostname:"172.31.22.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.265 [INFO][3405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.332 [INFO][3405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.332 [INFO][3405] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.18' Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.355 [INFO][3405] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.368 [INFO][3405] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.393 [INFO][3405] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.399 [INFO][3405] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.405 [INFO][3405] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.405 [INFO][3405] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.409 [INFO][3405] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.422 [INFO][3405] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.437 [INFO][3405] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.130/26] block=192.168.87.128/26 handle="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.437 [INFO][3405] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.130/26] handle="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" host="172.31.22.18" Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.437 [INFO][3405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
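
Both sandboxes drew their addresses from the node's affine IPAM block 192.168.87.128/26 (csi-node-driver-nwzz8 got 192.168.87.129, nginx-deployment-8587fbcb89-mn4rh got 192.168.87.130). For reference, what that /26 covers, using the stdlib ipaddress module:

    import ipaddress

    # The affine block claimed by node 172.31.22.18 in the IPAM entries above.
    block = ipaddress.ip_network("192.168.87.128/26")
    print(block.num_addresses)   # 64 addresses per /26 block
    print(block[0], block[-1])   # 192.168.87.128 192.168.87.191

    # The two addresses assigned so far both fall inside it.
    for ip in ("192.168.87.129", "192.168.87.130"):
        assert ipaddress.ip_address(ip) in block
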
Jan 29 11:38:14.481063 containerd[1875]: 2025-01-29 11:38:14.437 [INFO][3405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.130/26] IPv6=[] ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" HandleID="k8s-pod-network.fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Workload="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.482704 containerd[1875]: 2025-01-29 11:38:14.439 [INFO][3392] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"076cc005-5bb4-41ad-b987-7c6aaa46808b", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-mn4rh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali187c0688724", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:14.482704 containerd[1875]: 2025-01-29 11:38:14.441 [INFO][3392] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.130/32] ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.482704 containerd[1875]: 2025-01-29 11:38:14.441 [INFO][3392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali187c0688724 ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.482704 containerd[1875]: 2025-01-29 11:38:14.451 [INFO][3392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.482704 containerd[1875]: 2025-01-29 11:38:14.453 [INFO][3392] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"076cc005-5bb4-41ad-b987-7c6aaa46808b", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f", Pod:"nginx-deployment-8587fbcb89-mn4rh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali187c0688724", MAC:"be:88:1f:5d:40:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:14.482704 containerd[1875]: 2025-01-29 11:38:14.477 [INFO][3392] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f" Namespace="default" Pod="nginx-deployment-8587fbcb89-mn4rh" WorkloadEndpoint="172.31.22.18-k8s-nginx--deployment--8587fbcb89--mn4rh-eth0" Jan 29 11:38:14.499169 containerd[1875]: time="2025-01-29T11:38:14.499123251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwzz8,Uid:60248392-cc03-4e9c-8d46-1318728a4ee1,Namespace:calico-system,Attempt:9,} returns sandbox id \"68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c\"" Jan 29 11:38:14.506117 containerd[1875]: time="2025-01-29T11:38:14.506006114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:38:14.525983 containerd[1875]: time="2025-01-29T11:38:14.525636818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:38:14.525983 containerd[1875]: time="2025-01-29T11:38:14.525731632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:38:14.525983 containerd[1875]: time="2025-01-29T11:38:14.525749000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:14.525983 containerd[1875]: time="2025-01-29T11:38:14.525917678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:14.560027 systemd[1]: Started cri-containerd-fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f.scope - libcontainer container fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f. 
Jan 29 11:38:14.655489 containerd[1875]: time="2025-01-29T11:38:14.655218722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mn4rh,Uid:076cc005-5bb4-41ad-b987-7c6aaa46808b,Namespace:default,Attempt:8,} returns sandbox id \"fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f\"" Jan 29 11:38:15.012147 systemd[1]: run-containerd-runc-k8s.io-17d8e5597d4564820e06adf42ef7beca357072fc8c192ffdbfad12f15f87d908-runc.vmvMYX.mount: Deactivated successfully. Jan 29 11:38:15.337871 kernel: bpftool[3663]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:38:15.417147 kubelet[2329]: E0129 11:38:15.417055 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:15.736559 systemd-networkd[1730]: vxlan.calico: Link UP Jan 29 11:38:15.736570 systemd-networkd[1730]: vxlan.calico: Gained carrier Jan 29 11:38:15.740625 (udev-worker)[3420]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:38:15.994236 systemd-networkd[1730]: cali4e95ef8c474: Gained IPv6LL Jan 29 11:38:16.185092 systemd-networkd[1730]: cali187c0688724: Gained IPv6LL Jan 29 11:38:16.233740 containerd[1875]: time="2025-01-29T11:38:16.233470164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:16.236781 containerd[1875]: time="2025-01-29T11:38:16.236253720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:38:16.238757 containerd[1875]: time="2025-01-29T11:38:16.238707666Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:16.256033 containerd[1875]: time="2025-01-29T11:38:16.248808422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:16.256033 containerd[1875]: time="2025-01-29T11:38:16.251010094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.74494335s" Jan 29 11:38:16.256033 containerd[1875]: time="2025-01-29T11:38:16.251985798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:38:16.256610 containerd[1875]: time="2025-01-29T11:38:16.256326339Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:38:16.260990 containerd[1875]: time="2025-01-29T11:38:16.260439770Z" level=info msg="CreateContainer within sandbox \"68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:38:16.315608 containerd[1875]: time="2025-01-29T11:38:16.315553984Z" level=info msg="CreateContainer within sandbox \"68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"0fb71c6d00b7782c558698d206df7072842694f68fed8764a1f2dc38f72f8818\"" Jan 29 11:38:16.324479 containerd[1875]: time="2025-01-29T11:38:16.324439586Z" level=info msg="StartContainer for \"0fb71c6d00b7782c558698d206df7072842694f68fed8764a1f2dc38f72f8818\"" Jan 29 11:38:16.411050 systemd[1]: Started cri-containerd-0fb71c6d00b7782c558698d206df7072842694f68fed8764a1f2dc38f72f8818.scope - libcontainer container 0fb71c6d00b7782c558698d206df7072842694f68fed8764a1f2dc38f72f8818. Jan 29 11:38:16.423423 kubelet[2329]: E0129 11:38:16.417940 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:16.518297 containerd[1875]: time="2025-01-29T11:38:16.517715613Z" level=info msg="StartContainer for \"0fb71c6d00b7782c558698d206df7072842694f68fed8764a1f2dc38f72f8818\" returns successfully" Jan 29 11:38:17.404429 systemd-networkd[1730]: vxlan.calico: Gained IPv6LL Jan 29 11:38:17.420862 kubelet[2329]: E0129 11:38:17.418224 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:18.418770 kubelet[2329]: E0129 11:38:18.418637 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:19.419151 kubelet[2329]: E0129 11:38:19.419110 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:19.436863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701105827.mount: Deactivated successfully. Jan 29 11:38:20.063454 ntpd[1856]: Listen normally on 7 vxlan.calico 192.168.87.128:123 Jan 29 11:38:20.064409 ntpd[1856]: 29 Jan 11:38:20 ntpd[1856]: Listen normally on 7 vxlan.calico 192.168.87.128:123 Jan 29 11:38:20.064478 ntpd[1856]: 29 Jan 11:38:20 ntpd[1856]: Listen normally on 8 cali4e95ef8c474 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 29 11:38:20.064455 ntpd[1856]: Listen normally on 8 cali4e95ef8c474 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 29 11:38:20.064641 ntpd[1856]: 29 Jan 11:38:20 ntpd[1856]: Listen normally on 9 cali187c0688724 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 11:38:20.064641 ntpd[1856]: 29 Jan 11:38:20 ntpd[1856]: Listen normally on 10 vxlan.calico [fe80::6414:21ff:fe31:a8b%5]:123 Jan 29 11:38:20.064513 ntpd[1856]: Listen normally on 9 cali187c0688724 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 11:38:20.064557 ntpd[1856]: Listen normally on 10 vxlan.calico [fe80::6414:21ff:fe31:a8b%5]:123 Jan 29 11:38:20.422954 kubelet[2329]: E0129 11:38:20.421214 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:21.422103 kubelet[2329]: E0129 11:38:21.422051 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:21.523703 containerd[1875]: time="2025-01-29T11:38:21.519107557Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 11:38:21.526788 containerd[1875]: time="2025-01-29T11:38:21.526715624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:21.530786 containerd[1875]: time="2025-01-29T11:38:21.530692308Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
11:38:21.532067 containerd[1875]: time="2025-01-29T11:38:21.531594360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:21.532909 containerd[1875]: time="2025-01-29T11:38:21.532872249Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.276475835s" Jan 29 11:38:21.533020 containerd[1875]: time="2025-01-29T11:38:21.532916565Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:38:21.553301 containerd[1875]: time="2025-01-29T11:38:21.551778698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:38:21.565282 containerd[1875]: time="2025-01-29T11:38:21.564492126Z" level=info msg="CreateContainer within sandbox \"fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:38:21.626209 containerd[1875]: time="2025-01-29T11:38:21.626146267Z" level=info msg="CreateContainer within sandbox \"fa4bd4107a4a867c52f08f16b0647f16f6eba610450a9b1f308ea990ebaed73f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c4f69ed63229edc918881a58ceb17331f786013093a44c655faaa2f4af7fd725\"" Jan 29 11:38:21.627521 containerd[1875]: time="2025-01-29T11:38:21.627426889Z" level=info msg="StartContainer for \"c4f69ed63229edc918881a58ceb17331f786013093a44c655faaa2f4af7fd725\"" Jan 29 11:38:21.699061 systemd[1]: Started cri-containerd-c4f69ed63229edc918881a58ceb17331f786013093a44c655faaa2f4af7fd725.scope - libcontainer container c4f69ed63229edc918881a58ceb17331f786013093a44c655faaa2f4af7fd725. 
Jan 29 11:38:21.752513 containerd[1875]: time="2025-01-29T11:38:21.752209380Z" level=info msg="StartContainer for \"c4f69ed63229edc918881a58ceb17331f786013093a44c655faaa2f4af7fd725\" returns successfully" Jan 29 11:38:22.042179 kubelet[2329]: I0129 11:38:22.042110 2329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mn4rh" podStartSLOduration=10.157905507 podStartE2EDuration="17.03774978s" podCreationTimestamp="2025-01-29 11:38:05 +0000 UTC" firstStartedPulling="2025-01-29 11:38:14.659920209 +0000 UTC m=+25.070107900" lastFinishedPulling="2025-01-29 11:38:21.539764494 +0000 UTC m=+31.949952173" observedRunningTime="2025-01-29 11:38:22.03374108 +0000 UTC m=+32.443928780" watchObservedRunningTime="2025-01-29 11:38:22.03774978 +0000 UTC m=+32.447937478" Jan 29 11:38:22.422810 kubelet[2329]: E0129 11:38:22.422613 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:23.191368 containerd[1875]: time="2025-01-29T11:38:23.191311538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:23.194215 containerd[1875]: time="2025-01-29T11:38:23.193998023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:38:23.196867 containerd[1875]: time="2025-01-29T11:38:23.196386807Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:23.201723 containerd[1875]: time="2025-01-29T11:38:23.200659212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:23.201723 containerd[1875]: time="2025-01-29T11:38:23.201509035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.649625941s" Jan 29 11:38:23.201723 containerd[1875]: time="2025-01-29T11:38:23.201548904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:38:23.203901 containerd[1875]: time="2025-01-29T11:38:23.203860669Z" level=info msg="CreateContainer within sandbox \"68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:38:23.233315 containerd[1875]: time="2025-01-29T11:38:23.233261989Z" level=info msg="CreateContainer within sandbox \"68a186e304c9bb3adbc7dc221e400c5e5135440db316958402f08c3be0a0377c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"794deed8d7dada9741b3c022f42d9835448d4aefb738dfdce6ae04e28b4b4baa\"" Jan 29 11:38:23.233998 containerd[1875]: time="2025-01-29T11:38:23.233951767Z" level=info msg="StartContainer for 
\"794deed8d7dada9741b3c022f42d9835448d4aefb738dfdce6ae04e28b4b4baa\"" Jan 29 11:38:23.278494 systemd[1]: Started cri-containerd-794deed8d7dada9741b3c022f42d9835448d4aefb738dfdce6ae04e28b4b4baa.scope - libcontainer container 794deed8d7dada9741b3c022f42d9835448d4aefb738dfdce6ae04e28b4b4baa. Jan 29 11:38:23.340349 containerd[1875]: time="2025-01-29T11:38:23.340285589Z" level=info msg="StartContainer for \"794deed8d7dada9741b3c022f42d9835448d4aefb738dfdce6ae04e28b4b4baa\" returns successfully" Jan 29 11:38:23.423138 kubelet[2329]: E0129 11:38:23.423073 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:23.641421 kubelet[2329]: I0129 11:38:23.633339 2329 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:38:23.641592 kubelet[2329]: I0129 11:38:23.641438 2329 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:38:24.423803 kubelet[2329]: E0129 11:38:24.423751 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:25.424708 kubelet[2329]: E0129 11:38:25.424652 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:25.756465 update_engine[1863]: I20250129 11:38:25.756271 1863 update_attempter.cc:509] Updating boot flags... Jan 29 11:38:25.845874 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3939) Jan 29 11:38:26.176006 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3939) Jan 29 11:38:26.193166 kubelet[2329]: I0129 11:38:26.193095 2329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nwzz8" podStartSLOduration=27.49597645 podStartE2EDuration="36.19307183s" podCreationTimestamp="2025-01-29 11:37:50 +0000 UTC" firstStartedPulling="2025-01-29 11:38:14.505606597 +0000 UTC m=+24.915794289" lastFinishedPulling="2025-01-29 11:38:23.202701984 +0000 UTC m=+33.612889669" observedRunningTime="2025-01-29 11:38:24.060349697 +0000 UTC m=+34.470537396" watchObservedRunningTime="2025-01-29 11:38:26.19307183 +0000 UTC m=+36.603259526" Jan 29 11:38:26.275971 systemd[1]: Created slice kubepods-besteffort-pod9af33155_191e_483b_ad23_0752e8142d48.slice - libcontainer container kubepods-besteffort-pod9af33155_191e_483b_ad23_0752e8142d48.slice. 
Jan 29 11:38:26.302322 kubelet[2329]: I0129 11:38:26.302096 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vklfv\" (UniqueName: \"kubernetes.io/projected/9af33155-191e-483b-ad23-0752e8142d48-kube-api-access-vklfv\") pod \"nfs-server-provisioner-0\" (UID: \"9af33155-191e-483b-ad23-0752e8142d48\") " pod="default/nfs-server-provisioner-0" Jan 29 11:38:26.302322 kubelet[2329]: I0129 11:38:26.302239 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9af33155-191e-483b-ad23-0752e8142d48-data\") pod \"nfs-server-provisioner-0\" (UID: \"9af33155-191e-483b-ad23-0752e8142d48\") " pod="default/nfs-server-provisioner-0" Jan 29 11:38:26.427140 kubelet[2329]: E0129 11:38:26.426962 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:26.584681 containerd[1875]: time="2025-01-29T11:38:26.584635913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9af33155-191e-483b-ad23-0752e8142d48,Namespace:default,Attempt:0,}" Jan 29 11:38:26.899284 (udev-worker)[3940]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:38:26.901022 systemd-networkd[1730]: cali60e51b789ff: Link UP Jan 29 11:38:26.902863 systemd-networkd[1730]: cali60e51b789ff: Gained carrier Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.698 [INFO][4122] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.18-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 9af33155-191e-483b-ad23-0752e8142d48 1193 0 2025-01-29 11:38:26 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.22.18 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.698 [INFO][4122] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.749 [INFO][4132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" HandleID="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Workload="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 
11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.764 [INFO][4132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" HandleID="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Workload="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.18", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:38:26.749317189 +0000 UTC"}, Hostname:"172.31.22.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.764 [INFO][4132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.764 [INFO][4132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.764 [INFO][4132] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.18' Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.768 [INFO][4132] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.862 [INFO][4132] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.868 [INFO][4132] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.871 [INFO][4132] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.874 [INFO][4132] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.874 [INFO][4132] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.876 [INFO][4132] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.882 [INFO][4132] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.890 [INFO][4132] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.131/26] block=192.168.87.128/26 handle="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.890 [INFO][4132] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.131/26] handle="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" host="172.31.22.18" Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.890 [INFO][4132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:38:26.919922 containerd[1875]: 2025-01-29 11:38:26.890 [INFO][4132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.131/26] IPv6=[] ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" HandleID="k8s-pod-network.6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Workload="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:38:26.921212 containerd[1875]: 2025-01-29 11:38:26.892 [INFO][4122] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9af33155-191e-483b-ad23-0752e8142d48", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 38, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:26.921212 containerd[1875]: 2025-01-29 11:38:26.893 [INFO][4122] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.131/32] ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:38:26.921212 containerd[1875]: 2025-01-29 11:38:26.893 [INFO][4122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:38:26.921212 containerd[1875]: 2025-01-29 11:38:26.903 [INFO][4122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:38:26.921512 containerd[1875]: 2025-01-29 11:38:26.904 [INFO][4122] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9af33155-191e-483b-ad23-0752e8142d48", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 38, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"2a:f4:57:b1:1e:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:26.921512 containerd[1875]: 2025-01-29 11:38:26.918 [INFO][4122] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.18-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:38:26.962935 containerd[1875]: time="2025-01-29T11:38:26.962454249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:38:26.962935 containerd[1875]: time="2025-01-29T11:38:26.962520695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:38:26.962935 containerd[1875]: time="2025-01-29T11:38:26.962544874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:26.963222 containerd[1875]: time="2025-01-29T11:38:26.962865411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:27.000351 systemd[1]: Started cri-containerd-6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f.scope - libcontainer container 6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f. 
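
The v3.WorkloadEndpoint dump above prints the nfs-server-provisioner-0 ports in hex; decoded, they are the same decimal ports listed in the earlier CNI plugin entry (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662):

    # Hex Port values from the endpoint dump above, printed back in decimal.
    ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
             "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}
    for name, port in ports.items():
        print(name, port)
    # nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662
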
Jan 29 11:38:27.135528 containerd[1875]: time="2025-01-29T11:38:27.135484118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9af33155-191e-483b-ad23-0752e8142d48,Namespace:default,Attempt:0,} returns sandbox id \"6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f\"" Jan 29 11:38:27.166491 containerd[1875]: time="2025-01-29T11:38:27.166367969Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:38:27.427665 kubelet[2329]: E0129 11:38:27.427556 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:28.222203 systemd-networkd[1730]: cali60e51b789ff: Gained IPv6LL Jan 29 11:38:28.428434 kubelet[2329]: E0129 11:38:28.428390 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:29.429569 kubelet[2329]: E0129 11:38:29.429516 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:30.396000 kubelet[2329]: E0129 11:38:30.395956 2329 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:30.430488 kubelet[2329]: E0129 11:38:30.430387 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:30.462258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794746692.mount: Deactivated successfully. Jan 29 11:38:31.063253 ntpd[1856]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 11:38:31.063677 ntpd[1856]: 29 Jan 11:38:31 ntpd[1856]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 11:38:31.432940 kubelet[2329]: E0129 11:38:31.431979 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:32.435590 kubelet[2329]: E0129 11:38:32.433943 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:33.435984 kubelet[2329]: E0129 11:38:33.435903 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:33.645899 containerd[1875]: time="2025-01-29T11:38:33.645770075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:33.647585 containerd[1875]: time="2025-01-29T11:38:33.647493912Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 11:38:33.648857 containerd[1875]: time="2025-01-29T11:38:33.648464540Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:33.653502 containerd[1875]: time="2025-01-29T11:38:33.653432126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:38:33.659428 containerd[1875]: time="2025-01-29T11:38:33.659373131Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.492960147s" Jan 29 11:38:33.659428 containerd[1875]: time="2025-01-29T11:38:33.659428672Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 11:38:33.756935 containerd[1875]: time="2025-01-29T11:38:33.756892796Z" level=info msg="CreateContainer within sandbox \"6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:38:33.772589 containerd[1875]: time="2025-01-29T11:38:33.772545764Z" level=info msg="CreateContainer within sandbox \"6044af1896e884dd02eb61aa8e5f2f5b52d0fcd6648d310eb6c08613c4f1ee9f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"11630a68b3565bd9122f0b541f16ebd8aef9b9c4b67f37e82a2be9ecd3cf6dcc\"" Jan 29 11:38:33.773688 containerd[1875]: time="2025-01-29T11:38:33.773653469Z" level=info msg="StartContainer for \"11630a68b3565bd9122f0b541f16ebd8aef9b9c4b67f37e82a2be9ecd3cf6dcc\"" Jan 29 11:38:33.828039 systemd[1]: Started cri-containerd-11630a68b3565bd9122f0b541f16ebd8aef9b9c4b67f37e82a2be9ecd3cf6dcc.scope - libcontainer container 11630a68b3565bd9122f0b541f16ebd8aef9b9c4b67f37e82a2be9ecd3cf6dcc. Jan 29 11:38:33.892446 containerd[1875]: time="2025-01-29T11:38:33.892267670Z" level=info msg="StartContainer for \"11630a68b3565bd9122f0b541f16ebd8aef9b9c4b67f37e82a2be9ecd3cf6dcc\" returns successfully" Jan 29 11:38:34.135326 kubelet[2329]: I0129 11:38:34.134975 2329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.5592632690000001 podStartE2EDuration="8.13179527s" podCreationTimestamp="2025-01-29 11:38:26 +0000 UTC" firstStartedPulling="2025-01-29 11:38:27.139806685 +0000 UTC m=+37.549994366" lastFinishedPulling="2025-01-29 11:38:33.712338685 +0000 UTC m=+44.122526367" observedRunningTime="2025-01-29 11:38:34.130517244 +0000 UTC m=+44.540704967" watchObservedRunningTime="2025-01-29 11:38:34.13179527 +0000 UTC m=+44.541982970" Jan 29 11:38:34.436725 kubelet[2329]: E0129 11:38:34.436547 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:35.437407 kubelet[2329]: E0129 11:38:35.437317 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:36.438195 kubelet[2329]: E0129 11:38:36.438130 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:37.438956 kubelet[2329]: E0129 11:38:37.438901 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:38.440120 kubelet[2329]: E0129 11:38:38.440017 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:39.440494 kubelet[2329]: E0129 11:38:39.440433 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:40.441506 kubelet[2329]: E0129 
11:38:40.441225 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:41.442426 kubelet[2329]: E0129 11:38:41.442258 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:42.442623 kubelet[2329]: E0129 11:38:42.442562 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:43.443180 kubelet[2329]: E0129 11:38:43.443113 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:44.443341 kubelet[2329]: E0129 11:38:44.443287 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:45.443676 kubelet[2329]: E0129 11:38:45.443620 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:46.444610 kubelet[2329]: E0129 11:38:46.444563 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:47.445616 kubelet[2329]: E0129 11:38:47.445557 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:48.446743 kubelet[2329]: E0129 11:38:48.446681 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:49.446945 kubelet[2329]: E0129 11:38:49.446887 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:50.391342 kubelet[2329]: E0129 11:38:50.391280 2329 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:50.435780 containerd[1875]: time="2025-01-29T11:38:50.435735705Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:50.436377 containerd[1875]: time="2025-01-29T11:38:50.435994865Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:50.436377 containerd[1875]: time="2025-01-29T11:38:50.436062995Z" level=info msg="StopPodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:50.445766 containerd[1875]: time="2025-01-29T11:38:50.444854301Z" level=info msg="RemovePodSandbox for \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:50.448508 kubelet[2329]: E0129 11:38:50.447892 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:50.453518 containerd[1875]: time="2025-01-29T11:38:50.453466688Z" level=info msg="Forcibly stopping sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\"" Jan 29 11:38:50.453682 containerd[1875]: time="2025-01-29T11:38:50.453606090Z" level=info msg="TearDown network for sandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" successfully" Jan 29 11:38:50.488634 containerd[1875]: time="2025-01-29T11:38:50.488571469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\": an error occurred 
when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.488904 containerd[1875]: time="2025-01-29T11:38:50.488665995Z" level=info msg="RemovePodSandbox \"216c7811ac130a3fb67a0705d8faf937266b3b9566fbec00e15f684f8a1e3795\" returns successfully" Jan 29 11:38:50.499298 containerd[1875]: time="2025-01-29T11:38:50.498633754Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:50.499298 containerd[1875]: time="2025-01-29T11:38:50.498782355Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:50.499298 containerd[1875]: time="2025-01-29T11:38:50.499146010Z" level=info msg="StopPodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:50.499753 containerd[1875]: time="2025-01-29T11:38:50.499721877Z" level=info msg="RemovePodSandbox for \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:50.499852 containerd[1875]: time="2025-01-29T11:38:50.499755467Z" level=info msg="Forcibly stopping sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\"" Jan 29 11:38:50.499907 containerd[1875]: time="2025-01-29T11:38:50.499858045Z" level=info msg="TearDown network for sandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" successfully" Jan 29 11:38:50.512114 containerd[1875]: time="2025-01-29T11:38:50.512049602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.512114 containerd[1875]: time="2025-01-29T11:38:50.512122079Z" level=info msg="RemovePodSandbox \"81f0bf6a50e4fabde7dd62d12aea35a8ff4ba29aa94292e5b0dbcc3bdd34217e\" returns successfully" Jan 29 11:38:50.512624 containerd[1875]: time="2025-01-29T11:38:50.512589880Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:50.512725 containerd[1875]: time="2025-01-29T11:38:50.512706389Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:50.512775 containerd[1875]: time="2025-01-29T11:38:50.512721644Z" level=info msg="StopPodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:50.513378 containerd[1875]: time="2025-01-29T11:38:50.513346860Z" level=info msg="RemovePodSandbox for \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:50.513469 containerd[1875]: time="2025-01-29T11:38:50.513380360Z" level=info msg="Forcibly stopping sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\"" Jan 29 11:38:50.513529 containerd[1875]: time="2025-01-29T11:38:50.513480771Z" level=info msg="TearDown network for sandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" successfully" Jan 29 11:38:50.518336 containerd[1875]: time="2025-01-29T11:38:50.518289331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.518691 containerd[1875]: time="2025-01-29T11:38:50.518346064Z" level=info msg="RemovePodSandbox \"538fd0d5057dc6a28263a4d9c1006cd354e6dff76f43d1a1d0a350bbaa3c2ea6\" returns successfully" Jan 29 11:38:50.519018 containerd[1875]: time="2025-01-29T11:38:50.518987299Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:50.519122 containerd[1875]: time="2025-01-29T11:38:50.519097099Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:50.519122 containerd[1875]: time="2025-01-29T11:38:50.519116378Z" level=info msg="StopPodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:50.519440 containerd[1875]: time="2025-01-29T11:38:50.519416143Z" level=info msg="RemovePodSandbox for \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:50.519509 containerd[1875]: time="2025-01-29T11:38:50.519440713Z" level=info msg="Forcibly stopping sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\"" Jan 29 11:38:50.519563 containerd[1875]: time="2025-01-29T11:38:50.519515894Z" level=info msg="TearDown network for sandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" successfully" Jan 29 11:38:50.527633 containerd[1875]: time="2025-01-29T11:38:50.527573949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.527792 containerd[1875]: time="2025-01-29T11:38:50.527650490Z" level=info msg="RemovePodSandbox \"737379651b3aa2f389b43b4dbcf0a0b2c65c0b9dfd90df300c1cc78223bef4cf\" returns successfully" Jan 29 11:38:50.528686 containerd[1875]: time="2025-01-29T11:38:50.528654674Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:50.528800 containerd[1875]: time="2025-01-29T11:38:50.528776370Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:50.528963 containerd[1875]: time="2025-01-29T11:38:50.528798150Z" level=info msg="StopPodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:50.529329 containerd[1875]: time="2025-01-29T11:38:50.529303021Z" level=info msg="RemovePodSandbox for \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:50.529405 containerd[1875]: time="2025-01-29T11:38:50.529337239Z" level=info msg="Forcibly stopping sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\"" Jan 29 11:38:50.529553 containerd[1875]: time="2025-01-29T11:38:50.529430924Z" level=info msg="TearDown network for sandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" successfully" Jan 29 11:38:50.540176 containerd[1875]: time="2025-01-29T11:38:50.539431642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.540353 containerd[1875]: time="2025-01-29T11:38:50.540190587Z" level=info msg="RemovePodSandbox \"c3274db9cdf7d82669680a6dc6fdb4b3a67aa0130c064a5f25b0b77ef4b98a10\" returns successfully" Jan 29 11:38:50.542708 containerd[1875]: time="2025-01-29T11:38:50.542634060Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:50.545491 containerd[1875]: time="2025-01-29T11:38:50.545450072Z" level=info msg="TearDown network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" successfully" Jan 29 11:38:50.547863 containerd[1875]: time="2025-01-29T11:38:50.545865719Z" level=info msg="StopPodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" returns successfully" Jan 29 11:38:50.567567 containerd[1875]: time="2025-01-29T11:38:50.567531176Z" level=info msg="RemovePodSandbox for \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:50.567763 containerd[1875]: time="2025-01-29T11:38:50.567741032Z" level=info msg="Forcibly stopping sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\"" Jan 29 11:38:50.576167 containerd[1875]: time="2025-01-29T11:38:50.576094780Z" level=info msg="TearDown network for sandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" successfully" Jan 29 11:38:50.583342 containerd[1875]: time="2025-01-29T11:38:50.583293293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.583493 containerd[1875]: time="2025-01-29T11:38:50.583362586Z" level=info msg="RemovePodSandbox \"462027f11a01e789a82061dc7f1cc3d297ae7466649a4dc801b2538dd3a2355b\" returns successfully" Jan 29 11:38:50.586476 containerd[1875]: time="2025-01-29T11:38:50.584924408Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" Jan 29 11:38:50.586476 containerd[1875]: time="2025-01-29T11:38:50.586162331Z" level=info msg="TearDown network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" successfully" Jan 29 11:38:50.586476 containerd[1875]: time="2025-01-29T11:38:50.586235613Z" level=info msg="StopPodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" returns successfully" Jan 29 11:38:50.593427 containerd[1875]: time="2025-01-29T11:38:50.592942620Z" level=info msg="RemovePodSandbox for \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" Jan 29 11:38:50.593427 containerd[1875]: time="2025-01-29T11:38:50.593433957Z" level=info msg="Forcibly stopping sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\"" Jan 29 11:38:50.593642 containerd[1875]: time="2025-01-29T11:38:50.593553310Z" level=info msg="TearDown network for sandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" successfully" Jan 29 11:38:50.608249 containerd[1875]: time="2025-01-29T11:38:50.608094589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.608411 containerd[1875]: time="2025-01-29T11:38:50.608273823Z" level=info msg="RemovePodSandbox \"bcc9a08010fb35b7f0936222545b7870fb3e5df61743b71d0b8a8644b1d8b912\" returns successfully" Jan 29 11:38:50.609849 containerd[1875]: time="2025-01-29T11:38:50.609479360Z" level=info msg="StopPodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\"" Jan 29 11:38:50.609849 containerd[1875]: time="2025-01-29T11:38:50.609656100Z" level=info msg="TearDown network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" successfully" Jan 29 11:38:50.609849 containerd[1875]: time="2025-01-29T11:38:50.609669852Z" level=info msg="StopPodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" returns successfully" Jan 29 11:38:50.610702 containerd[1875]: time="2025-01-29T11:38:50.610677449Z" level=info msg="RemovePodSandbox for \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\"" Jan 29 11:38:50.610766 containerd[1875]: time="2025-01-29T11:38:50.610710458Z" level=info msg="Forcibly stopping sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\"" Jan 29 11:38:50.610864 containerd[1875]: time="2025-01-29T11:38:50.610796770Z" level=info msg="TearDown network for sandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" successfully" Jan 29 11:38:50.616000 containerd[1875]: time="2025-01-29T11:38:50.615953575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.616160 containerd[1875]: time="2025-01-29T11:38:50.616021880Z" level=info msg="RemovePodSandbox \"c76cd149a006d32561c3d17292d247b556b057bceb3523b2a64cc3d617b427a1\" returns successfully" Jan 29 11:38:50.616479 containerd[1875]: time="2025-01-29T11:38:50.616445591Z" level=info msg="StopPodSandbox for \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\"" Jan 29 11:38:50.616570 containerd[1875]: time="2025-01-29T11:38:50.616553961Z" level=info msg="TearDown network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\" successfully" Jan 29 11:38:50.616625 containerd[1875]: time="2025-01-29T11:38:50.616569316Z" level=info msg="StopPodSandbox for \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\" returns successfully" Jan 29 11:38:50.616938 containerd[1875]: time="2025-01-29T11:38:50.616907888Z" level=info msg="RemovePodSandbox for \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\"" Jan 29 11:38:50.616938 containerd[1875]: time="2025-01-29T11:38:50.616935519Z" level=info msg="Forcibly stopping sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\"" Jan 29 11:38:50.617141 containerd[1875]: time="2025-01-29T11:38:50.617083520Z" level=info msg="TearDown network for sandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\" successfully" Jan 29 11:38:50.622299 containerd[1875]: time="2025-01-29T11:38:50.622234045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.622299 containerd[1875]: time="2025-01-29T11:38:50.622300193Z" level=info msg="RemovePodSandbox \"8b79c3451af2dd3abee001578beea7ee748562764856304d2e9536ff1b463fcd\" returns successfully" Jan 29 11:38:50.623069 containerd[1875]: time="2025-01-29T11:38:50.623033339Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:50.623172 containerd[1875]: time="2025-01-29T11:38:50.623149494Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:50.623172 containerd[1875]: time="2025-01-29T11:38:50.623166278Z" level=info msg="StopPodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:50.623706 containerd[1875]: time="2025-01-29T11:38:50.623521742Z" level=info msg="RemovePodSandbox for \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:50.623706 containerd[1875]: time="2025-01-29T11:38:50.623552803Z" level=info msg="Forcibly stopping sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\"" Jan 29 11:38:50.623847 containerd[1875]: time="2025-01-29T11:38:50.623749783Z" level=info msg="TearDown network for sandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" successfully" Jan 29 11:38:50.629738 containerd[1875]: time="2025-01-29T11:38:50.629687675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.636252 containerd[1875]: time="2025-01-29T11:38:50.629753702Z" level=info msg="RemovePodSandbox \"ea5a24182ae6641fccbbbf881afee34b1e37581cdea07e541dd4c20d8bb134c4\" returns successfully" Jan 29 11:38:50.637814 containerd[1875]: time="2025-01-29T11:38:50.637770943Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:50.637964 containerd[1875]: time="2025-01-29T11:38:50.637926708Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:50.637964 containerd[1875]: time="2025-01-29T11:38:50.637946441Z" level=info msg="StopPodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:50.639978 containerd[1875]: time="2025-01-29T11:38:50.639949755Z" level=info msg="RemovePodSandbox for \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:50.642106 containerd[1875]: time="2025-01-29T11:38:50.641097605Z" level=info msg="Forcibly stopping sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\"" Jan 29 11:38:50.642106 containerd[1875]: time="2025-01-29T11:38:50.641211061Z" level=info msg="TearDown network for sandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" successfully" Jan 29 11:38:50.657333 containerd[1875]: time="2025-01-29T11:38:50.657287956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.657457 containerd[1875]: time="2025-01-29T11:38:50.657347799Z" level=info msg="RemovePodSandbox \"81a591738eb1ac16cb42f5ed39c3920ea2214d2f5fc3f1d787a92fa777a5b834\" returns successfully" Jan 29 11:38:50.657920 containerd[1875]: time="2025-01-29T11:38:50.657885546Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:50.658039 containerd[1875]: time="2025-01-29T11:38:50.658004630Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:50.658039 containerd[1875]: time="2025-01-29T11:38:50.658020197Z" level=info msg="StopPodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:50.658356 containerd[1875]: time="2025-01-29T11:38:50.658332637Z" level=info msg="RemovePodSandbox for \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:50.658432 containerd[1875]: time="2025-01-29T11:38:50.658359234Z" level=info msg="Forcibly stopping sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\"" Jan 29 11:38:50.658487 containerd[1875]: time="2025-01-29T11:38:50.658438958Z" level=info msg="TearDown network for sandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" successfully" Jan 29 11:38:50.663627 containerd[1875]: time="2025-01-29T11:38:50.663582295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.663851 containerd[1875]: time="2025-01-29T11:38:50.663643115Z" level=info msg="RemovePodSandbox \"927a3e368c857aee56fec13a0102c646b1689a908268eddb6e01637928fdad6c\" returns successfully" Jan 29 11:38:50.664133 containerd[1875]: time="2025-01-29T11:38:50.664100689Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:50.664232 containerd[1875]: time="2025-01-29T11:38:50.664215260Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:50.664277 containerd[1875]: time="2025-01-29T11:38:50.664233676Z" level=info msg="StopPodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:50.664861 containerd[1875]: time="2025-01-29T11:38:50.664637011Z" level=info msg="RemovePodSandbox for \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:50.664861 containerd[1875]: time="2025-01-29T11:38:50.664665608Z" level=info msg="Forcibly stopping sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\"" Jan 29 11:38:50.664861 containerd[1875]: time="2025-01-29T11:38:50.664732640Z" level=info msg="TearDown network for sandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" successfully" Jan 29 11:38:50.669550 containerd[1875]: time="2025-01-29T11:38:50.669485476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.669550 containerd[1875]: time="2025-01-29T11:38:50.669551042Z" level=info msg="RemovePodSandbox \"79a8470c61dc8ab98e22503772b8c52b83ce790d03c5e564d5676ba3252ca1fe\" returns successfully" Jan 29 11:38:50.670205 containerd[1875]: time="2025-01-29T11:38:50.670176872Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:50.670316 containerd[1875]: time="2025-01-29T11:38:50.670295136Z" level=info msg="TearDown network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" successfully" Jan 29 11:38:50.670370 containerd[1875]: time="2025-01-29T11:38:50.670314980Z" level=info msg="StopPodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" returns successfully" Jan 29 11:38:50.670665 containerd[1875]: time="2025-01-29T11:38:50.670641642Z" level=info msg="RemovePodSandbox for \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:50.670739 containerd[1875]: time="2025-01-29T11:38:50.670670299Z" level=info msg="Forcibly stopping sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\"" Jan 29 11:38:50.670817 containerd[1875]: time="2025-01-29T11:38:50.670748793Z" level=info msg="TearDown network for sandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" successfully" Jan 29 11:38:50.675572 containerd[1875]: time="2025-01-29T11:38:50.675519009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.675716 containerd[1875]: time="2025-01-29T11:38:50.675582497Z" level=info msg="RemovePodSandbox \"516f171545da168b11d937fb409e6e45a7dc6cae7e97e358df301dc83844bbcc\" returns successfully" Jan 29 11:38:50.676126 containerd[1875]: time="2025-01-29T11:38:50.676096049Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" Jan 29 11:38:50.676227 containerd[1875]: time="2025-01-29T11:38:50.676210739Z" level=info msg="TearDown network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" successfully" Jan 29 11:38:50.676271 containerd[1875]: time="2025-01-29T11:38:50.676227182Z" level=info msg="StopPodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" returns successfully" Jan 29 11:38:50.676543 containerd[1875]: time="2025-01-29T11:38:50.676518262Z" level=info msg="RemovePodSandbox for \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" Jan 29 11:38:50.677140 containerd[1875]: time="2025-01-29T11:38:50.676543419Z" level=info msg="Forcibly stopping sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\"" Jan 29 11:38:50.677418 containerd[1875]: time="2025-01-29T11:38:50.677142250Z" level=info msg="TearDown network for sandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" successfully" Jan 29 11:38:50.682391 containerd[1875]: time="2025-01-29T11:38:50.682346550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.682537 containerd[1875]: time="2025-01-29T11:38:50.682407474Z" level=info msg="RemovePodSandbox \"8fb7ed5229073ad6a7b25d1fcef69e50ce85e02f26f53254af79cd4b8c55a731\" returns successfully" Jan 29 11:38:50.682950 containerd[1875]: time="2025-01-29T11:38:50.682920900Z" level=info msg="StopPodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\"" Jan 29 11:38:50.683054 containerd[1875]: time="2025-01-29T11:38:50.683029955Z" level=info msg="TearDown network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" successfully" Jan 29 11:38:50.683165 containerd[1875]: time="2025-01-29T11:38:50.683051921Z" level=info msg="StopPodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" returns successfully" Jan 29 11:38:50.683665 containerd[1875]: time="2025-01-29T11:38:50.683609421Z" level=info msg="RemovePodSandbox for \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\"" Jan 29 11:38:50.683772 containerd[1875]: time="2025-01-29T11:38:50.683666997Z" level=info msg="Forcibly stopping sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\"" Jan 29 11:38:50.684219 containerd[1875]: time="2025-01-29T11:38:50.683823648Z" level=info msg="TearDown network for sandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" successfully" Jan 29 11:38:50.689337 containerd[1875]: time="2025-01-29T11:38:50.689155409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:38:50.689481 containerd[1875]: time="2025-01-29T11:38:50.689370095Z" level=info msg="RemovePodSandbox \"c52653514d90535bf9be9b55d6ed0baac06f092a82b2b824d5b97ba17d89610b\" returns successfully" Jan 29 11:38:50.689873 containerd[1875]: time="2025-01-29T11:38:50.689823278Z" level=info msg="StopPodSandbox for \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\"" Jan 29 11:38:50.690019 containerd[1875]: time="2025-01-29T11:38:50.689957503Z" level=info msg="TearDown network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\" successfully" Jan 29 11:38:50.690070 containerd[1875]: time="2025-01-29T11:38:50.690023640Z" level=info msg="StopPodSandbox for \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\" returns successfully" Jan 29 11:38:50.690418 containerd[1875]: time="2025-01-29T11:38:50.690390887Z" level=info msg="RemovePodSandbox for \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\"" Jan 29 11:38:50.690539 containerd[1875]: time="2025-01-29T11:38:50.690424002Z" level=info msg="Forcibly stopping sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\"" Jan 29 11:38:50.690619 containerd[1875]: time="2025-01-29T11:38:50.690564855Z" level=info msg="TearDown network for sandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\" successfully" Jan 29 11:38:50.695692 containerd[1875]: time="2025-01-29T11:38:50.695648259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:38:50.695817 containerd[1875]: time="2025-01-29T11:38:50.695705659Z" level=info msg="RemovePodSandbox \"fdd2777217320249da7cc4a55923709bc21e0701189a586446f49702263f4815\" returns successfully" Jan 29 11:38:51.449051 kubelet[2329]: E0129 11:38:51.448980 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:52.450220 kubelet[2329]: E0129 11:38:52.450161 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:53.451097 kubelet[2329]: E0129 11:38:53.451044 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:54.453788 kubelet[2329]: E0129 11:38:54.453742 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:55.454506 kubelet[2329]: E0129 11:38:55.454443 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:56.454903 kubelet[2329]: E0129 11:38:56.454847 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:57.455874 kubelet[2329]: E0129 11:38:57.455817 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:58.456324 kubelet[2329]: E0129 11:38:58.456264 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:58.692542 systemd[1]: Created slice kubepods-besteffort-pod440e0d08_5d40_466c_98bf_7f81497d78e4.slice - libcontainer container kubepods-besteffort-pod440e0d08_5d40_466c_98bf_7f81497d78e4.slice. Jan 29 11:38:58.783478 kubelet[2329]: I0129 11:38:58.783424 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7rxl\" (UniqueName: \"kubernetes.io/projected/440e0d08-5d40-466c-98bf-7f81497d78e4-kube-api-access-b7rxl\") pod \"test-pod-1\" (UID: \"440e0d08-5d40-466c-98bf-7f81497d78e4\") " pod="default/test-pod-1" Jan 29 11:38:58.783658 kubelet[2329]: I0129 11:38:58.783484 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fce4a8dd-4e82-43e0-97a7-6450d84184b9\" (UniqueName: \"kubernetes.io/nfs/440e0d08-5d40-466c-98bf-7f81497d78e4-pvc-fce4a8dd-4e82-43e0-97a7-6450d84184b9\") pod \"test-pod-1\" (UID: \"440e0d08-5d40-466c-98bf-7f81497d78e4\") " pod="default/test-pod-1" Jan 29 11:38:58.933897 kernel: FS-Cache: Loaded Jan 29 11:38:59.027132 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:38:59.027319 kernel: RPC: Registered udp transport module. Jan 29 11:38:59.027341 kernel: RPC: Registered tcp transport module. Jan 29 11:38:59.027358 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:38:59.027374 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 11:38:59.456741 kubelet[2329]: E0129 11:38:59.456678 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:38:59.473095 kernel: NFS: Registering the id_resolver key type Jan 29 11:38:59.473209 kernel: Key type id_resolver registered Jan 29 11:38:59.473260 kernel: Key type id_legacy registered Jan 29 11:38:59.518584 nfsidmap[4357]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 11:38:59.524482 nfsidmap[4358]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 11:38:59.607455 containerd[1875]: time="2025-01-29T11:38:59.607411170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:440e0d08-5d40-466c-98bf-7f81497d78e4,Namespace:default,Attempt:0,}" Jan 29 11:38:59.782761 (udev-worker)[4354]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:38:59.783499 systemd-networkd[1730]: cali5ec59c6bf6e: Link UP Jan 29 11:38:59.784409 systemd-networkd[1730]: cali5ec59c6bf6e: Gained carrier Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.682 [INFO][4361] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.18-k8s-test--pod--1-eth0 default 440e0d08-5d40-466c-98bf-7f81497d78e4 1295 0 2025-01-29 11:38:27 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.18 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.682 [INFO][4361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.716 [INFO][4371] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" HandleID="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Workload="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.731 [INFO][4371] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" HandleID="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Workload="172.31.22.18-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336c80), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.18", "pod":"test-pod-1", "timestamp":"2025-01-29 11:38:59.716629891 +0000 UTC"}, Hostname:"172.31.22.18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.732 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.732 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.732 [INFO][4371] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.18' Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.738 [INFO][4371] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.746 [INFO][4371] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.753 [INFO][4371] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.756 [INFO][4371] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.759 [INFO][4371] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.759 [INFO][4371] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.761 [INFO][4371] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3 Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.767 [INFO][4371] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.777 [INFO][4371] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.132/26] block=192.168.87.128/26 handle="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.777 [INFO][4371] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.132/26] handle="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" host="172.31.22.18" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.777 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.777 [INFO][4371] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.132/26] IPv6=[] ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" HandleID="k8s-pod-network.56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Workload="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.804614 containerd[1875]: 2025-01-29 11:38:59.779 [INFO][4361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"440e0d08-5d40-466c-98bf-7f81497d78e4", ResourceVersion:"1295", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 38, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:59.807813 containerd[1875]: 2025-01-29 11:38:59.779 [INFO][4361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.132/32] ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.807813 containerd[1875]: 2025-01-29 11:38:59.779 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.807813 containerd[1875]: 2025-01-29 11:38:59.781 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.807813 containerd[1875]: 2025-01-29 11:38:59.781 [INFO][4361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.18-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"440e0d08-5d40-466c-98bf-7f81497d78e4", ResourceVersion:"1295", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 38, 27, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.18", ContainerID:"56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"aa:7f:b7:55:c2:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:38:59.807813 containerd[1875]: 2025-01-29 11:38:59.802 [INFO][4361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.18-k8s-test--pod--1-eth0" Jan 29 11:38:59.887859 containerd[1875]: time="2025-01-29T11:38:59.874705265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:38:59.887859 containerd[1875]: time="2025-01-29T11:38:59.874788774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:38:59.887859 containerd[1875]: time="2025-01-29T11:38:59.874812145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:59.887859 containerd[1875]: time="2025-01-29T11:38:59.875327810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:38:59.927150 systemd[1]: Started cri-containerd-56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3.scope - libcontainer container 56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3. 
Jan 29 11:39:00.102371 containerd[1875]: time="2025-01-29T11:39:00.101797450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:440e0d08-5d40-466c-98bf-7f81497d78e4,Namespace:default,Attempt:0,} returns sandbox id \"56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3\"" Jan 29 11:39:00.116437 containerd[1875]: time="2025-01-29T11:39:00.115377185Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:39:00.457752 kubelet[2329]: E0129 11:39:00.457611 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:00.505439 containerd[1875]: time="2025-01-29T11:39:00.505367474Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:39:00.511152 containerd[1875]: time="2025-01-29T11:39:00.511101332Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 395.674163ms" Jan 29 11:39:00.511152 containerd[1875]: time="2025-01-29T11:39:00.511146077Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:39:00.516952 containerd[1875]: time="2025-01-29T11:39:00.516624986Z" level=info msg="CreateContainer within sandbox \"56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:39:00.521551 containerd[1875]: time="2025-01-29T11:39:00.519484615Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:00.553733 containerd[1875]: time="2025-01-29T11:39:00.553527810Z" level=info msg="CreateContainer within sandbox \"56b4fc71aaae09f78b21a2d62b00ce6ae37a8cf4de5be88772708fe27401bea3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c65c804feaacdee0b34ff4b29b792f381d4657c72d9d5761ece3a8812b2029ba\"" Jan 29 11:39:00.555090 containerd[1875]: time="2025-01-29T11:39:00.555054692Z" level=info msg="StartContainer for \"c65c804feaacdee0b34ff4b29b792f381d4657c72d9d5761ece3a8812b2029ba\"" Jan 29 11:39:00.610463 systemd[1]: Started cri-containerd-c65c804feaacdee0b34ff4b29b792f381d4657c72d9d5761ece3a8812b2029ba.scope - libcontainer container c65c804feaacdee0b34ff4b29b792f381d4657c72d9d5761ece3a8812b2029ba. 
Jan 29 11:39:00.702437 containerd[1875]: time="2025-01-29T11:39:00.702210201Z" level=info msg="StartContainer for \"c65c804feaacdee0b34ff4b29b792f381d4657c72d9d5761ece3a8812b2029ba\" returns successfully" Jan 29 11:39:01.211659 kubelet[2329]: I0129 11:39:01.211275 2329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.810458145 podStartE2EDuration="34.21125488s" podCreationTimestamp="2025-01-29 11:38:27 +0000 UTC" firstStartedPulling="2025-01-29 11:39:00.111494351 +0000 UTC m=+70.521682043" lastFinishedPulling="2025-01-29 11:39:00.512291091 +0000 UTC m=+70.922478778" observedRunningTime="2025-01-29 11:39:01.211239952 +0000 UTC m=+71.621427652" watchObservedRunningTime="2025-01-29 11:39:01.21125488 +0000 UTC m=+71.621442572" Jan 29 11:39:01.240069 systemd-networkd[1730]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 11:39:01.458817 kubelet[2329]: E0129 11:39:01.458757 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:02.459052 kubelet[2329]: E0129 11:39:02.458936 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:03.459184 kubelet[2329]: E0129 11:39:03.459129 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:04.063176 ntpd[1856]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 11:39:04.063660 ntpd[1856]: 29 Jan 11:39:04 ntpd[1856]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 11:39:04.460225 kubelet[2329]: E0129 11:39:04.459865 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:05.460526 kubelet[2329]: E0129 11:39:05.460430 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:06.461291 kubelet[2329]: E0129 11:39:06.461243 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:07.462231 kubelet[2329]: E0129 11:39:07.462179 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:08.463373 kubelet[2329]: E0129 11:39:08.463330 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:09.463924 kubelet[2329]: E0129 11:39:09.463825 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:10.390957 kubelet[2329]: E0129 11:39:10.390897 2329 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:10.464974 kubelet[2329]: E0129 11:39:10.464917 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:11.465645 kubelet[2329]: E0129 11:39:11.465586 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:12.466401 kubelet[2329]: E0129 11:39:12.466354 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:13.466559 kubelet[2329]: E0129 11:39:13.466507 2329 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:14.467849 kubelet[2329]: E0129 11:39:14.467689 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:15.469124 kubelet[2329]: E0129 11:39:15.468639 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:16.469392 kubelet[2329]: E0129 11:39:16.469333 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:17.470164 kubelet[2329]: E0129 11:39:17.470109 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:18.470966 kubelet[2329]: E0129 11:39:18.470858 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:19.472181 kubelet[2329]: E0129 11:39:19.472124 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:20.472403 kubelet[2329]: E0129 11:39:20.472293 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:21.473470 kubelet[2329]: E0129 11:39:21.473233 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:22.474179 kubelet[2329]: E0129 11:39:22.474099 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:23.474457 kubelet[2329]: E0129 11:39:23.474403 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:24.475567 kubelet[2329]: E0129 11:39:24.475376 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:25.476711 kubelet[2329]: E0129 11:39:25.476595 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:26.477259 kubelet[2329]: E0129 11:39:26.477204 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:27.478192 kubelet[2329]: E0129 11:39:27.478135 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:28.479601 kubelet[2329]: E0129 11:39:28.479542 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:29.480712 kubelet[2329]: E0129 11:39:29.480648 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:30.391483 kubelet[2329]: E0129 11:39:30.391428 2329 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:30.481145 kubelet[2329]: E0129 11:39:30.480793 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:31.481492 kubelet[2329]: E0129 11:39:31.481434 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:39:32.481980 
Jan 29 11:39:32.806402 kubelet[2329]: E0129 11:39:32.806255 2329 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.18?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:39:33.482666 kubelet[2329]: E0129 11:39:33.482607 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:39:34.483736 kubelet[2329]: E0129 11:39:34.483678 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:39:35.484855 kubelet[2329]: E0129 11:39:35.484791 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:39:36.484987 kubelet[2329]: E0129 11:39:36.484939 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:39:37.485602 kubelet[2329]: E0129 11:39:37.485542 2329 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
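[Editor's note] The controller.go:195 entry at 11:39:32.806402 above records one failed renewal of this node's Lease object (172.31.22.18 in the kube-node-lease namespace): the PUT to the API server at https://172.31.19.232:6443 hit the 10 s client timeout. The kubelet retries, and a single missed renewal does not by itself mark the node NotReady. A hedged sketch for inspecting that lease with the kubernetes Python client; the lease name and namespace come from the URL in the log, while the tooling is an editorial assumption.

# Sketch only: read the node lease the kubelet failed to renew above.
from kubernetes import client, config

config.load_kube_config()
lease = client.CoordinationV1Api().read_namespaced_lease(
    name="172.31.22.18", namespace="kube-node-lease"
)
print(lease.spec.renew_time, lease.spec.lease_duration_seconds)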