Dec 13 13:27:41.083370 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024 Dec 13 13:27:41.083396 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:41.083410 kernel: BIOS-provided physical RAM map: Dec 13 13:27:41.083419 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 13:27:41.083427 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 13:27:41.083436 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 13:27:41.083445 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 13 13:27:41.083454 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 13 13:27:41.083462 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 13:27:41.083471 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 13:27:41.083482 kernel: NX (Execute Disable) protection: active Dec 13 13:27:41.083491 kernel: APIC: Static calls initialized Dec 13 13:27:41.083499 kernel: SMBIOS 2.8 present. Dec 13 13:27:41.083508 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Dec 13 13:27:41.083518 kernel: Hypervisor detected: KVM Dec 13 13:27:41.083529 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 13:27:41.083538 kernel: kvm-clock: using sched offset of 5268335771 cycles Dec 13 13:27:41.083547 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 13:27:41.083557 kernel: tsc: Detected 1996.249 MHz processor Dec 13 13:27:41.083566 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 13:27:41.083576 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 13:27:41.083585 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 13 13:27:41.083594 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 13:27:41.083604 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 13:27:41.083615 kernel: ACPI: Early table checksum verification disabled Dec 13 13:27:41.083624 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Dec 13 13:27:41.083634 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:41.083643 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:41.083652 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:41.083661 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 13:27:41.083670 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:41.083680 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:41.083689 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Dec 13 13:27:41.083700 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Dec 13 13:27:41.083709 kernel: ACPI: Reserving FACS 
table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 13:27:41.083718 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Dec 13 13:27:41.083728 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Dec 13 13:27:41.083737 kernel: No NUMA configuration found Dec 13 13:27:41.083746 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Dec 13 13:27:41.083755 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Dec 13 13:27:41.083768 kernel: Zone ranges: Dec 13 13:27:41.083780 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:27:41.083789 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Dec 13 13:27:41.083799 kernel: Normal empty Dec 13 13:27:41.083808 kernel: Movable zone start for each node Dec 13 13:27:41.083818 kernel: Early memory node ranges Dec 13 13:27:41.083828 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 13:27:41.083839 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 13 13:27:41.083848 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Dec 13 13:27:41.083858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:27:41.083867 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 13:27:41.083877 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Dec 13 13:27:41.083886 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 13:27:41.083895 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 13:27:41.083905 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:27:41.083915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 13:27:41.083925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 13:27:41.083936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 13:27:41.083946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 13:27:41.083955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 13:27:41.083965 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:27:41.083975 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 13:27:41.083985 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 13:27:41.083995 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 13:27:41.084004 kernel: Booting paravirtualized kernel on KVM Dec 13 13:27:41.084014 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:27:41.084026 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 13:27:41.084035 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 13:27:41.084045 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 13:27:41.084054 kernel: pcpu-alloc: [0] 0 1 Dec 13 13:27:41.084064 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 13:27:41.084075 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:41.084086 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will 
be passed to user space. Dec 13 13:27:41.084095 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:27:41.084107 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:27:41.084116 kernel: Fallback order for Node 0: 0 Dec 13 13:27:41.084126 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Dec 13 13:27:41.084135 kernel: Policy zone: DMA32 Dec 13 13:27:41.084145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:27:41.084155 kernel: Memory: 1969164K/2096620K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 127196K reserved, 0K cma-reserved) Dec 13 13:27:41.084165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:27:41.084175 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:27:41.084186 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:27:41.084195 kernel: Dynamic Preempt: voluntary Dec 13 13:27:41.084205 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:27:41.084215 kernel: rcu: RCU event tracing is enabled. Dec 13 13:27:41.084225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:27:41.084234 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:27:41.084244 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:27:41.084254 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:27:41.084263 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 13:27:41.084273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:27:41.084284 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 13:27:41.084294 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:27:41.084303 kernel: Console: colour VGA+ 80x25 Dec 13 13:27:41.084312 kernel: printk: console [tty0] enabled Dec 13 13:27:41.084322 kernel: printk: console [ttyS0] enabled Dec 13 13:27:41.084887 kernel: ACPI: Core revision 20230628 Dec 13 13:27:41.084897 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:27:41.084906 kernel: x2apic enabled Dec 13 13:27:41.084915 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 13:27:41.084927 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 13:27:41.084936 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 13:27:41.084945 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Dec 13 13:27:41.084954 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 13:27:41.084963 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 13:27:41.084972 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:27:41.084981 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 13:27:41.084990 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:27:41.084999 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 13:27:41.085009 kernel: Speculative Store Bypass: Vulnerable Dec 13 13:27:41.085018 kernel: x86/fpu: x87 FPU will use FXSAVE Dec 13 13:27:41.085026 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:27:41.085035 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:27:41.085044 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:27:41.085053 kernel: landlock: Up and running. Dec 13 13:27:41.085061 kernel: SELinux: Initializing. 
Dec 13 13:27:41.085071 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:27:41.085087 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:27:41.085097 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Dec 13 13:27:41.085106 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:27:41.085116 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:27:41.085127 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:27:41.085136 kernel: Performance Events: AMD PMU driver. Dec 13 13:27:41.085145 kernel: ... version: 0 Dec 13 13:27:41.085154 kernel: ... bit width: 48 Dec 13 13:27:41.085165 kernel: ... generic registers: 4 Dec 13 13:27:41.085175 kernel: ... value mask: 0000ffffffffffff Dec 13 13:27:41.085184 kernel: ... max period: 00007fffffffffff Dec 13 13:27:41.085193 kernel: ... fixed-purpose events: 0 Dec 13 13:27:41.085202 kernel: ... event mask: 000000000000000f Dec 13 13:27:41.085211 kernel: signal: max sigframe size: 1440 Dec 13 13:27:41.085220 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:27:41.085230 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:27:41.085239 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:27:41.085249 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:27:41.085259 kernel: .... node #0, CPUs: #1 Dec 13 13:27:41.085269 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:27:41.085278 kernel: smpboot: Max logical packages: 2 Dec 13 13:27:41.085287 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Dec 13 13:27:41.085296 kernel: devtmpfs: initialized Dec 13 13:27:41.085305 kernel: x86/mm: Memory block size: 128MB Dec 13 13:27:41.085315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:27:41.085348 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:27:41.085358 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:27:41.085370 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:27:41.085379 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:27:41.085388 kernel: audit: type=2000 audit(1734096460.711:1): state=initialized audit_enabled=0 res=1 Dec 13 13:27:41.085397 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:27:41.085407 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:27:41.085416 kernel: cpuidle: using governor menu Dec 13 13:27:41.085425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:27:41.085435 kernel: dca service started, version 1.12.1 Dec 13 13:27:41.085444 kernel: PCI: Using configuration type 1 for base access Dec 13 13:27:41.085455 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 13:27:41.085464 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:27:41.085488 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:27:41.085500 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:27:41.085509 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:27:41.085519 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:27:41.085528 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:27:41.085537 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:27:41.085546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:27:41.085558 kernel: ACPI: Interpreter enabled Dec 13 13:27:41.085569 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 13:27:41.085579 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:27:41.085589 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:27:41.085599 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 13:27:41.085609 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 13:27:41.085619 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 13:27:41.085775 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:27:41.085885 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 13:27:41.085983 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 13:27:41.085999 kernel: acpiphp: Slot [3] registered Dec 13 13:27:41.086009 kernel: acpiphp: Slot [4] registered Dec 13 13:27:41.086020 kernel: acpiphp: Slot [5] registered Dec 13 13:27:41.086029 kernel: acpiphp: Slot [6] registered Dec 13 13:27:41.086040 kernel: acpiphp: Slot [7] registered Dec 13 13:27:41.086050 kernel: acpiphp: Slot [8] registered Dec 13 13:27:41.086063 kernel: acpiphp: Slot [9] registered Dec 13 13:27:41.086073 kernel: acpiphp: Slot [10] registered Dec 13 13:27:41.086083 kernel: acpiphp: Slot [11] registered Dec 13 13:27:41.086093 kernel: acpiphp: Slot [12] registered Dec 13 13:27:41.086103 kernel: acpiphp: Slot [13] registered Dec 13 13:27:41.086113 kernel: acpiphp: Slot [14] registered Dec 13 13:27:41.086122 kernel: acpiphp: Slot [15] registered Dec 13 13:27:41.086132 kernel: acpiphp: Slot [16] registered Dec 13 13:27:41.086146 kernel: acpiphp: Slot [17] registered Dec 13 13:27:41.086161 kernel: acpiphp: Slot [18] registered Dec 13 13:27:41.086181 kernel: acpiphp: Slot [19] registered Dec 13 13:27:41.086195 kernel: acpiphp: Slot [20] registered Dec 13 13:27:41.086210 kernel: acpiphp: Slot [21] registered Dec 13 13:27:41.086224 kernel: acpiphp: Slot [22] registered Dec 13 13:27:41.086241 kernel: acpiphp: Slot [23] registered Dec 13 13:27:41.086258 kernel: acpiphp: Slot [24] registered Dec 13 13:27:41.086280 kernel: acpiphp: Slot [25] registered Dec 13 13:27:41.086291 kernel: acpiphp: Slot [26] registered Dec 13 13:27:41.086301 kernel: acpiphp: Slot [27] registered Dec 13 13:27:41.086315 kernel: acpiphp: Slot [28] registered Dec 13 13:27:41.086347 kernel: acpiphp: Slot [29] registered Dec 13 13:27:41.086358 kernel: acpiphp: Slot [30] registered Dec 13 13:27:41.086368 kernel: acpiphp: Slot [31] registered Dec 13 13:27:41.086377 kernel: PCI host bridge to bus 0000:00 Dec 13 13:27:41.086500 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 13:27:41.086591 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] Dec 13 13:27:41.086682 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 13:27:41.086790 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 13:27:41.086872 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 13:27:41.086952 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 13:27:41.087072 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 13:27:41.087209 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 13:27:41.087313 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 13:27:41.087441 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Dec 13 13:27:41.087530 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 13:27:41.087621 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 13:27:41.087710 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 13:27:41.087799 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 13:27:41.087904 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 13:27:41.088003 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 13:27:41.088105 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 13:27:41.088214 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 13:27:41.088313 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 13:27:41.090466 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 13:27:41.090569 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Dec 13 13:27:41.090667 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Dec 13 13:27:41.090764 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 13:27:41.090879 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 13:27:41.090981 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Dec 13 13:27:41.091082 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Dec 13 13:27:41.091183 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 13:27:41.091281 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Dec 13 13:27:41.091415 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 13:27:41.091516 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 13:27:41.091622 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Dec 13 13:27:41.091721 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 13:27:41.091828 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 13:27:41.091920 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Dec 13 13:27:41.092011 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 13:27:41.092108 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 13:27:41.092199 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Dec 13 13:27:41.092294 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 13:27:41.092308 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 13:27:41.092318 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 13:27:41.092343 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 13:27:41.092353 kernel: ACPI: PCI: Interrupt link LNKD 
configured for IRQ 11 Dec 13 13:27:41.092362 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 13:27:41.092372 kernel: iommu: Default domain type: Translated Dec 13 13:27:41.092382 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:27:41.092394 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:27:41.092404 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 13:27:41.092414 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 13:27:41.092423 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 13 13:27:41.092516 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 13:27:41.092607 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 13:27:41.092699 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 13:27:41.092713 kernel: vgaarb: loaded Dec 13 13:27:41.092723 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 13:27:41.092736 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:27:41.092746 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:27:41.092755 kernel: pnp: PnP ACPI init Dec 13 13:27:41.092847 kernel: pnp 00:03: [dma 2] Dec 13 13:27:41.092862 kernel: pnp: PnP ACPI: found 5 devices Dec 13 13:27:41.092872 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:27:41.092882 kernel: NET: Registered PF_INET protocol family Dec 13 13:27:41.092891 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:27:41.092904 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 13:27:41.092913 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:27:41.092923 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:27:41.092932 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 13:27:41.092942 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 13:27:41.092951 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:27:41.092961 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:27:41.092970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:27:41.092979 kernel: NET: Registered PF_XDP protocol family Dec 13 13:27:41.093064 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 13:27:41.093146 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 13:27:41.093225 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 13:27:41.093305 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 13:27:41.093402 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 13:27:41.093509 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 13:27:41.093610 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 13:27:41.093626 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:27:41.093640 kernel: Initialise system trusted keyrings Dec 13 13:27:41.093651 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:27:41.093661 kernel: Key type asymmetric registered Dec 13 13:27:41.093671 kernel: Asymmetric key parser 'x509' registered Dec 13 13:27:41.093681 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:27:41.093691 kernel: io scheduler mq-deadline registered 
Dec 13 13:27:41.093701 kernel: io scheduler kyber registered Dec 13 13:27:41.093712 kernel: io scheduler bfq registered Dec 13 13:27:41.093722 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:27:41.093735 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 13:27:41.093745 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 13:27:41.093755 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 13:27:41.093766 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 13:27:41.093776 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:27:41.093954 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:27:41.093985 kernel: random: crng init done Dec 13 13:27:41.094011 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 13:27:41.094036 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:27:41.094082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:27:41.094694 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 13:27:41.094762 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:27:41.095087 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 13:27:41.095423 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:27:40 UTC (1734096460) Dec 13 13:27:41.095648 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 13:27:41.095684 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 13:27:41.095709 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:27:41.095746 kernel: Segment Routing with IPv6 Dec 13 13:27:41.095770 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:27:41.095793 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:27:41.095817 kernel: Key type dns_resolver registered Dec 13 13:27:41.095840 kernel: IPI shorthand broadcast: enabled Dec 13 13:27:41.095864 kernel: sched_clock: Marking stable (970007984, 150538523)->(1127938709, -7392202) Dec 13 13:27:41.095887 kernel: registered taskstats version 1 Dec 13 13:27:41.095910 kernel: Loading compiled-in X.509 certificates Dec 13 13:27:41.095934 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:27:41.095963 kernel: Key type .fscrypt registered Dec 13 13:27:41.095986 kernel: Key type fscrypt-provisioning registered Dec 13 13:27:41.096010 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:27:41.096033 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:27:41.096056 kernel: ima: No architecture policies found Dec 13 13:27:41.096079 kernel: clk: Disabling unused clocks Dec 13 13:27:41.096103 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:27:41.096126 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:27:41.096154 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:27:41.096177 kernel: Run /init as init process Dec 13 13:27:41.096201 kernel: with arguments: Dec 13 13:27:41.096224 kernel: /init Dec 13 13:27:41.096246 kernel: with environment: Dec 13 13:27:41.096269 kernel: HOME=/ Dec 13 13:27:41.096292 kernel: TERM=linux Dec 13 13:27:41.096314 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:27:41.100399 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:41.100437 systemd[1]: Detected virtualization kvm. Dec 13 13:27:41.100459 systemd[1]: Detected architecture x86-64. Dec 13 13:27:41.100484 systemd[1]: Running in initrd. Dec 13 13:27:41.100509 systemd[1]: No hostname configured, using default hostname. Dec 13 13:27:41.100535 systemd[1]: Hostname set to . Dec 13 13:27:41.100563 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:27:41.100592 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:27:41.100630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:41.100663 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:41.100698 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:27:41.100734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:41.100767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:27:41.100800 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:27:41.100840 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:27:41.100882 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:27:41.100914 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:41.100946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:41.100979 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:41.101037 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:41.101073 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:27:41.101108 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:41.101132 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:41.101156 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:41.101181 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:27:41.101205 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 13:27:41.101229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:41.101253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:41.101276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:41.101300 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:41.101356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:27:41.101381 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:41.101405 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:27:41.101428 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:27:41.101451 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:41.101500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:41.101525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:41.101547 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:41.101574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:41.101595 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:27:41.101667 systemd-journald[185]: Collecting audit messages is disabled. Dec 13 13:27:41.101723 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:41.101744 systemd-journald[185]: Journal started Dec 13 13:27:41.101787 systemd-journald[185]: Runtime Journal (/run/log/journal/14b9364a209a471ca0312eff65c29e25) is 4.9M, max 39.3M, 34.4M free. Dec 13 13:27:41.062607 systemd-modules-load[187]: Inserted module 'overlay' Dec 13 13:27:41.145127 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:41.145151 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:27:41.145167 kernel: Bridge firewalling registered Dec 13 13:27:41.111665 systemd-modules-load[187]: Inserted module 'br_netfilter' Dec 13 13:27:41.145032 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:41.145876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:41.146834 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:41.154513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:41.156539 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:41.160408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:41.172051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:41.174241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:41.184098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:41.186836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:41.192551 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:27:41.194072 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 13 13:27:41.203979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:41.211031 dracut-cmdline[218]: dracut-dracut-053 Dec 13 13:27:41.214392 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:41.249228 systemd-resolved[222]: Positive Trust Anchors: Dec 13 13:27:41.249248 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:41.249296 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:41.253069 systemd-resolved[222]: Defaulting to hostname 'linux'. Dec 13 13:27:41.254572 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:41.255714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:41.301393 kernel: SCSI subsystem initialized Dec 13 13:27:41.311372 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:27:41.323371 kernel: iscsi: registered transport (tcp) Dec 13 13:27:41.347376 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:27:41.347450 kernel: QLogic iSCSI HBA Driver Dec 13 13:27:41.390842 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:41.396556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:27:41.428372 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:27:41.428462 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:27:41.430382 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:27:41.479430 kernel: raid6: sse2x4 gen() 8187 MB/s Dec 13 13:27:41.497523 kernel: raid6: sse2x2 gen() 13209 MB/s Dec 13 13:27:41.514647 kernel: raid6: sse2x1 gen() 6877 MB/s Dec 13 13:27:41.514752 kernel: raid6: using algorithm sse2x2 gen() 13209 MB/s Dec 13 13:27:41.532856 kernel: raid6: .... xor() 9015 MB/s, rmw enabled Dec 13 13:27:41.532993 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 13:27:41.556392 kernel: xor: measuring software checksum speed Dec 13 13:27:41.556521 kernel: prefetch64-sse : 15304 MB/sec Dec 13 13:27:41.559808 kernel: generic_sse : 11659 MB/sec Dec 13 13:27:41.559903 kernel: xor: using function: prefetch64-sse (15304 MB/sec) Dec 13 13:27:41.740403 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:27:41.758639 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:41.769537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 13:27:41.821202 systemd-udevd[403]: Using default interface naming scheme 'v255'. Dec 13 13:27:41.832175 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:41.845118 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:27:41.873728 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 13:27:41.916817 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:41.922582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:41.974379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:41.986627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:27:42.010046 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:42.021239 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:42.023643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:42.025065 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:42.036025 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:27:42.053911 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:42.087441 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Dec 13 13:27:42.208610 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 13:27:42.208943 kernel: libata version 3.00 loaded. Dec 13 13:27:42.208975 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 13:27:42.209266 kernel: scsi host0: ata_piix Dec 13 13:27:42.209677 kernel: scsi host1: ata_piix Dec 13 13:27:42.209981 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 13:27:42.210009 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 13:27:42.210032 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:27:42.210055 kernel: GPT:17805311 != 41943039 Dec 13 13:27:42.210085 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:27:42.210107 kernel: GPT:17805311 != 41943039 Dec 13 13:27:42.210127 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:27:42.210148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:42.107144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:42.107303 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:42.109013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:42.109620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:42.109801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:42.110404 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:42.114728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:42.168974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:42.175581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:42.196790 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:27:42.419397 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (454) Dec 13 13:27:42.446840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:27:42.492398 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (446) Dec 13 13:27:42.500427 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 13:27:42.518398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 13:27:42.532117 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 13:27:42.533593 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 13:27:42.543688 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:27:42.568207 disk-uuid[506]: Primary Header is updated. Dec 13 13:27:42.568207 disk-uuid[506]: Secondary Entries is updated. Dec 13 13:27:42.568207 disk-uuid[506]: Secondary Header is updated. Dec 13 13:27:42.580394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:43.800473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:43.802175 disk-uuid[507]: The operation has completed successfully. Dec 13 13:27:43.910534 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:27:43.910810 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:27:43.946547 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:27:43.954009 sh[519]: Success Dec 13 13:27:43.974371 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 13:27:44.048499 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:27:44.050447 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:27:44.057570 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:27:44.096403 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:27:44.096513 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:44.097869 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:27:44.100082 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:27:44.100131 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:27:44.121843 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:27:44.123226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:27:44.129712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:27:44.134628 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 13:27:44.143898 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:44.143950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:44.143967 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:44.148349 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:44.158505 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:27:44.160371 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:44.174311 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:27:44.184747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:27:44.281423 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:44.291545 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:44.314938 systemd-networkd[701]: lo: Link UP Dec 13 13:27:44.315671 systemd-networkd[701]: lo: Gained carrier Dec 13 13:27:44.317170 systemd-networkd[701]: Enumeration completed Dec 13 13:27:44.318148 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:44.319184 systemd[1]: Reached target network.target - Network. Dec 13 13:27:44.319512 systemd-networkd[701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:44.319516 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:44.320737 systemd-networkd[701]: eth0: Link UP Dec 13 13:27:44.320742 systemd-networkd[701]: eth0: Gained carrier Dec 13 13:27:44.320762 systemd-networkd[701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:44.336522 systemd-networkd[701]: eth0: DHCPv4 address 172.24.4.94/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 13:27:44.365731 ignition[596]: Ignition 2.20.0 Dec 13 13:27:44.366669 ignition[596]: Stage: fetch-offline Dec 13 13:27:44.366719 ignition[596]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:44.366729 ignition[596]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:44.367353 ignition[596]: parsed url from cmdline: "" Dec 13 13:27:44.368607 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:44.367357 ignition[596]: no config URL provided Dec 13 13:27:44.367363 ignition[596]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:44.367373 ignition[596]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:44.367379 ignition[596]: failed to fetch config: resource requires networking Dec 13 13:27:44.367608 ignition[596]: Ignition finished successfully Dec 13 13:27:44.375825 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:27:44.389878 ignition[711]: Ignition 2.20.0 Dec 13 13:27:44.389892 ignition[711]: Stage: fetch Dec 13 13:27:44.390103 ignition[711]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:44.390116 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:44.390224 ignition[711]: parsed url from cmdline: "" Dec 13 13:27:44.390228 ignition[711]: no config URL provided Dec 13 13:27:44.390235 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:44.390244 ignition[711]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:44.390343 ignition[711]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 13:27:44.390363 ignition[711]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 13:27:44.390380 ignition[711]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 13:27:44.804168 ignition[711]: GET result: OK Dec 13 13:27:44.804385 ignition[711]: parsing config with SHA512: 2acec0d59f7dac096a330c25bb4cd173694d8c130e3f617e28e518c83ec2cff6de376f3d5937f715b8f1f702c469e96969cfa8803449899074c3a5bdd53d19a6 Dec 13 13:27:44.811970 unknown[711]: fetched base config from "system" Dec 13 13:27:44.812002 unknown[711]: fetched base config from "system" Dec 13 13:27:44.812660 ignition[711]: fetch: fetch complete Dec 13 13:27:44.812018 unknown[711]: fetched user config from "openstack" Dec 13 13:27:44.812674 ignition[711]: fetch: fetch passed Dec 13 13:27:44.818529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:27:44.812786 ignition[711]: Ignition finished successfully Dec 13 13:27:44.830028 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:27:44.874012 ignition[717]: Ignition 2.20.0 Dec 13 13:27:44.875672 ignition[717]: Stage: kargs Dec 13 13:27:44.876126 ignition[717]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:44.876154 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:44.880410 ignition[717]: kargs: kargs passed Dec 13 13:27:44.883108 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:27:44.880515 ignition[717]: Ignition finished successfully Dec 13 13:27:44.891603 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:27:44.937511 ignition[724]: Ignition 2.20.0 Dec 13 13:27:44.937543 ignition[724]: Stage: disks Dec 13 13:27:44.937959 ignition[724]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:44.937986 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:44.943306 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:27:44.939804 ignition[724]: disks: disks passed Dec 13 13:27:44.946716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:44.939903 ignition[724]: Ignition finished successfully Dec 13 13:27:44.948649 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:27:44.951020 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:44.953780 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:44.956026 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:44.969803 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 13:27:45.002529 systemd-fsck[732]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 13:27:45.012268 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:27:45.021700 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:27:45.215444 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:27:45.218529 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:27:45.222439 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:45.231545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:45.244575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:27:45.246501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:27:45.257722 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 13 13:27:45.261569 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:27:45.261647 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:45.271563 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:27:45.278604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:27:45.288396 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (740) Dec 13 13:27:45.353057 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:45.353201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:45.356064 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:45.436403 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:45.455072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:27:45.965283 systemd-networkd[701]: eth0: Gained IPv6LL Dec 13 13:27:46.228900 initrd-setup-root[767]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:27:46.242195 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:27:46.253924 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:27:46.267021 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:27:46.403447 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:46.410438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:27:46.414433 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:27:46.421967 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 13:27:46.424053 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:46.456546 ignition[857]: INFO : Ignition 2.20.0 Dec 13 13:27:46.457508 ignition[857]: INFO : Stage: mount Dec 13 13:27:46.458066 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:46.458066 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:46.460322 ignition[857]: INFO : mount: mount passed Dec 13 13:27:46.460322 ignition[857]: INFO : Ignition finished successfully Dec 13 13:27:46.461716 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:27:46.462528 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:27:53.375747 coreos-metadata[742]: Dec 13 13:27:53.375 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:27:53.416303 coreos-metadata[742]: Dec 13 13:27:53.416 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 13:27:53.428372 coreos-metadata[742]: Dec 13 13:27:53.428 INFO Fetch successful Dec 13 13:27:53.429842 coreos-metadata[742]: Dec 13 13:27:53.429 INFO wrote hostname ci-4186-0-0-e-a1a8545b52.novalocal to /sysroot/etc/hostname Dec 13 13:27:53.433911 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 13:27:53.434193 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 13:27:53.444552 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:27:53.481959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:53.581706 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (874) Dec 13 13:27:53.608705 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:53.608864 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:53.611675 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:53.724409 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:53.809579 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:27:53.854384 ignition[892]: INFO : Ignition 2.20.0 Dec 13 13:27:53.854384 ignition[892]: INFO : Stage: files Dec 13 13:27:53.857138 ignition[892]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:53.857138 ignition[892]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:53.857138 ignition[892]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:27:53.877922 ignition[892]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:27:53.877922 ignition[892]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:27:53.967958 ignition[892]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:27:53.970241 ignition[892]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:27:53.970241 ignition[892]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:27:53.969112 unknown[892]: wrote ssh authorized keys file for user: core Dec 13 13:27:53.979924 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:53.982548 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 13:27:54.550619 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 13:27:56.157960 ignition[892]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:56.157960 ignition[892]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:56.157960 ignition[892]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:56.157960 ignition[892]: INFO : files: files passed Dec 13 13:27:56.157960 ignition[892]: INFO : Ignition finished successfully Dec 13 13:27:56.163311 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:27:56.175775 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Dec 13 13:27:56.178788 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:27:56.195508 initrd-setup-root-after-ignition[918]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:56.195508 initrd-setup-root-after-ignition[918]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:56.200032 initrd-setup-root-after-ignition[922]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:56.199213 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:27:56.200949 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:27:56.209674 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:27:56.234867 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:27:56.235146 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:27:56.237219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:27:56.238997 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:27:56.354624 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:27:56.354874 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:27:56.357139 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:27:56.366757 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:27:56.402321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:56.411706 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:27:56.448004 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:56.449682 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:56.452727 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:27:56.455624 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:27:56.455897 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:56.458891 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:27:56.460631 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:27:56.463460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:27:56.465946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:56.468362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:56.471127 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:27:56.474114 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:56.477071 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:27:56.479839 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:27:56.492059 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:27:56.494593 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:27:56.494878 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:56.497854 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 13:27:56.499587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:56.501719 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:27:56.501968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:56.504582 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:27:56.504952 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:56.508721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:27:56.509028 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:27:56.511838 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:27:56.512076 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:27:56.524556 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:27:56.532866 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:27:56.534024 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:27:56.534482 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:56.539554 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:27:56.542479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:56.554168 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:27:56.554568 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:27:56.565867 ignition[944]: INFO : Ignition 2.20.0 Dec 13 13:27:56.567808 ignition[944]: INFO : Stage: umount Dec 13 13:27:56.567808 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:56.567808 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:27:56.567808 ignition[944]: INFO : umount: umount passed Dec 13 13:27:56.567808 ignition[944]: INFO : Ignition finished successfully Dec 13 13:27:56.571646 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:27:56.571759 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:27:56.573205 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:27:56.573280 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:27:56.574776 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:27:56.574848 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:27:56.575395 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:27:56.575439 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:27:56.575957 systemd[1]: Stopped target network.target - Network. Dec 13 13:27:56.578465 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:27:56.578519 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:56.579368 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:27:56.579802 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:27:56.587398 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:56.588769 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:27:56.589859 systemd[1]: Stopped target sockets.target - Socket Units. 
Dec 13 13:27:56.590914 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:27:56.590954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:56.591451 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:27:56.591485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:56.592022 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:27:56.592069 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:27:56.593045 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:27:56.593089 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:56.594236 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:27:56.595179 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:27:56.597243 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:27:56.597971 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:27:56.598093 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:27:56.598162 systemd-networkd[701]: eth0: DHCPv6 lease lost Dec 13 13:27:56.600460 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:27:56.600561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:27:56.602643 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:27:56.602752 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:27:56.605978 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:27:56.606179 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:56.607281 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:27:56.607441 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:56.618493 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:27:56.619525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:27:56.619581 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:56.620158 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:27:56.620197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:56.620688 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:27:56.620734 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:56.621205 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:27:56.621243 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:56.621938 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:56.627208 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:27:56.628402 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:56.630527 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:27:56.630569 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:56.631651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:27:56.631682 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:56.632796 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 13 13:27:56.632842 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:56.634445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:27:56.634489 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:56.635567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:56.635610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:56.640733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:27:56.641775 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:27:56.641894 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:56.643060 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:27:56.643135 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:56.647469 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:27:56.647525 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:56.651649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:56.651690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:56.653112 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:27:56.653208 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:27:56.654058 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:27:56.654154 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:27:56.655588 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:27:56.662570 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:27:56.671614 systemd[1]: Switching root. Dec 13 13:27:56.700878 systemd-journald[185]: Journal stopped Dec 13 13:27:58.561991 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 13:27:58.562066 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:27:58.562086 kernel: SELinux: policy capability open_perms=1 Dec 13 13:27:58.562100 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:27:58.562116 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:27:58.562134 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:27:58.562147 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:27:58.562161 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:27:58.562174 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:27:58.562187 kernel: audit: type=1403 audit(1734096477.280:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:27:58.562205 systemd[1]: Successfully loaded SELinux policy in 76.242ms. Dec 13 13:27:58.562221 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.496ms. Dec 13 13:27:58.562236 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:58.562250 systemd[1]: Detected virtualization kvm. 
Dec 13 13:27:58.562267 systemd[1]: Detected architecture x86-64. Dec 13 13:27:58.562280 systemd[1]: Detected first boot. Dec 13 13:27:58.562295 systemd[1]: Hostname set to . Dec 13 13:27:58.562311 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:27:58.564419 zram_generator::config[987]: No configuration found. Dec 13 13:27:58.564466 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:27:58.564482 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:27:58.564496 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:27:58.564510 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:27:58.564525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:27:58.564539 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:27:58.564558 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:27:58.564572 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:27:58.564587 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:27:58.564602 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:27:58.564616 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:27:58.564630 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:27:58.564644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:58.564658 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:58.564671 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:27:58.564685 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:27:58.564699 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:27:58.564715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:58.564729 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:27:58.564743 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:58.564756 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:27:58.564771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:27:58.564785 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:58.564801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:27:58.564815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:58.564829 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:58.564842 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:58.564856 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:27:58.564870 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:27:58.564884 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:27:58.564897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 13:27:58.564911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:58.564927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:58.564941 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:27:58.564955 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:27:58.564972 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:27:58.564986 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:27:58.564999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:58.565013 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:27:58.565026 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:27:58.565040 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:27:58.565057 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:27:58.565071 systemd[1]: Reached target machines.target - Containers. Dec 13 13:27:58.565085 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:27:58.565099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:58.565113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:58.565127 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:27:58.565141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:58.565155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:58.565171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:58.565187 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:27:58.565201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:58.565217 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:27:58.565232 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:27:58.565246 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:27:58.565260 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:27:58.565274 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:27:58.565288 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:58.565304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:58.565318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:27:58.565931 kernel: loop: module loaded Dec 13 13:27:58.565951 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:27:58.565966 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:58.565980 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:27:58.565994 systemd[1]: Stopped verity-setup.service. 
Dec 13 13:27:58.566008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:58.566022 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:27:58.566041 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:27:58.566056 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:27:58.566072 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:27:58.566086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:27:58.566102 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:27:58.566116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:58.566129 kernel: fuse: init (API version 7.39) Dec 13 13:27:58.566143 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:27:58.566157 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:27:58.566170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:58.566184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:58.566200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:58.566214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:58.566228 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:27:58.566242 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:27:58.566257 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:58.566299 systemd-journald[1080]: Collecting audit messages is disabled. Dec 13 13:27:58.568460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:58.568484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:58.568498 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:27:58.568513 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:27:58.568528 systemd-journald[1080]: Journal started Dec 13 13:27:58.568556 systemd-journald[1080]: Runtime Journal (/run/log/journal/14b9364a209a471ca0312eff65c29e25) is 4.9M, max 39.3M, 34.4M free. Dec 13 13:27:58.114197 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:27:58.142133 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:27:58.143305 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:27:58.571378 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:58.573144 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:27:58.582482 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:27:58.596375 kernel: ACPI: bus type drm_connector registered Dec 13 13:27:58.594672 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:27:58.602572 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:27:58.603466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:27:58.603522 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 13 13:27:58.605883 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:27:58.615209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:27:58.620496 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:27:58.621193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:58.623840 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:27:58.627504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:27:58.628123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:58.636242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:27:58.637630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:58.641369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:58.644586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:27:58.655498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:58.661475 systemd-journald[1080]: Time spent on flushing to /var/log/journal/14b9364a209a471ca0312eff65c29e25 is 19.221ms for 916 entries. Dec 13 13:27:58.661475 systemd-journald[1080]: System Journal (/var/log/journal/14b9364a209a471ca0312eff65c29e25) is 8.0M, max 584.8M, 576.8M free. Dec 13 13:27:58.710440 systemd-journald[1080]: Received client request to flush runtime journal. Dec 13 13:27:58.710518 kernel: loop0: detected capacity change from 0 to 138184 Dec 13 13:27:58.658713 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:58.659492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:58.660792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:58.667586 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:27:58.668276 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:27:58.669097 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:27:58.670918 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:27:58.687926 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:27:58.706577 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:27:58.719469 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:27:58.726790 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:27:58.735435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:58.749654 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 13:27:58.781991 systemd-tmpfiles[1120]: ACLs are not supported, ignoring. Dec 13 13:27:58.782456 systemd-tmpfiles[1120]: ACLs are not supported, ignoring. 
Dec 13 13:27:58.791246 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:27:58.791967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:58.792845 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:27:58.801619 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:27:58.832628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:27:58.856383 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 13:27:58.872054 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:27:58.879605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:58.893871 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Dec 13 13:27:58.893893 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Dec 13 13:27:58.898661 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:58.925379 kernel: loop2: detected capacity change from 0 to 141000 Dec 13 13:27:59.007062 kernel: loop3: detected capacity change from 0 to 8 Dec 13 13:27:59.036565 kernel: loop4: detected capacity change from 0 to 138184 Dec 13 13:27:59.073385 kernel: loop5: detected capacity change from 0 to 205544 Dec 13 13:27:59.135367 kernel: loop6: detected capacity change from 0 to 141000 Dec 13 13:27:59.182346 kernel: loop7: detected capacity change from 0 to 8 Dec 13 13:27:59.184484 (sd-merge)[1149]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 13:27:59.184927 (sd-merge)[1149]: Merged extensions into '/usr'. Dec 13 13:27:59.197173 systemd[1]: Reloading requested from client PID 1119 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:27:59.197196 systemd[1]: Reloading... Dec 13 13:27:59.275357 zram_generator::config[1171]: No configuration found. Dec 13 13:27:59.488633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:59.549987 systemd[1]: Reloading finished in 352 ms. Dec 13 13:27:59.579594 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:27:59.587482 systemd[1]: Starting ensure-sysext.service... Dec 13 13:27:59.589486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:59.661573 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:27:59.661593 systemd[1]: Reloading... Dec 13 13:27:59.704273 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:27:59.704642 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:27:59.708679 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:27:59.709050 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 13 13:27:59.709120 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 13 13:27:59.714174 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. 
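The merge step above is systemd-sysext activating the extension images it discovers: on this host, /etc/extensions/kubernetes.raw is the symlink Ignition created earlier, pointing at the versioned image under /opt/extensions, while the other listed extensions ship with the Flatcar image. A small sketch that lists that layout (it assumes the standard /etc/extensions location and is meant to be run inside the booted system):

from pathlib import Path

def list_extensions(ext_dir: str = "/etc/extensions") -> None:
    # systemd-sysext picks up *.raw images (or symlinks to them) from this directory.
    for image in sorted(Path(ext_dir).glob("*.raw")):
        target = image.resolve() if image.is_symlink() else image
        print(f"{image.name:20} -> {target}")

if __name__ == "__main__":
    # Expected here, per the log: kubernetes.raw ->
    # /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
    list_extensions()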
Dec 13 13:27:59.714186 systemd-tmpfiles[1231]: Skipping /boot Dec 13 13:27:59.730982 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:59.731510 systemd-tmpfiles[1231]: Skipping /boot Dec 13 13:27:59.793393 zram_generator::config[1260]: No configuration found. Dec 13 13:27:59.971853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:28:00.032977 systemd[1]: Reloading finished in 371 ms. Dec 13 13:28:00.050653 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:28:00.055761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:28:00.067492 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:28:00.093687 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:28:00.096506 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:28:00.105676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:28:00.112593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:28:00.116499 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:28:00.127201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:28:00.127463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:28:00.136643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:28:00.146728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:28:00.152800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:28:00.153508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:28:00.153666 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:28:00.159638 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:28:00.161999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:28:00.162203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:28:00.163460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:28:00.163607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:28:00.168874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:28:00.169180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:28:00.177880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 13 13:28:00.179561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:28:00.179791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:28:00.183276 systemd[1]: Finished ensure-sysext.service. Dec 13 13:28:00.194647 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:28:00.206700 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:28:00.212528 ldconfig[1114]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:28:00.213730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:28:00.213909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:28:00.216191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:28:00.217491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:28:00.219987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:28:00.230734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:28:00.235510 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Dec 13 13:28:00.237760 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:28:00.237983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:28:00.238886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:28:00.241763 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:28:00.241937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:28:00.249144 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:28:00.257771 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:28:00.288676 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:28:00.289760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:28:00.297604 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:28:00.299813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:28:00.314960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:28:00.331485 augenrules[1377]: No rules Dec 13 13:28:00.333186 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:28:00.334135 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:28:00.335789 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 13 13:28:00.354356 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1359) Dec 13 13:28:00.361350 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1359) Dec 13 13:28:00.387878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1367) Dec 13 13:28:00.434783 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:28:00.437036 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:28:00.482068 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:28:00.488704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:28:00.496569 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:28:00.498288 systemd-resolved[1321]: Positive Trust Anchors: Dec 13 13:28:00.499591 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:28:00.499718 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:28:00.503035 systemd-networkd[1362]: lo: Link UP Dec 13 13:28:00.505645 systemd-networkd[1362]: lo: Gained carrier Dec 13 13:28:00.507145 systemd-networkd[1362]: Enumeration completed Dec 13 13:28:00.507313 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:28:00.509193 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:28:00.509267 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:28:00.510052 systemd-networkd[1362]: eth0: Link UP Dec 13 13:28:00.510115 systemd-networkd[1362]: eth0: Gained carrier Dec 13 13:28:00.510185 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:28:00.510700 systemd-resolved[1321]: Using system hostname 'ci-4186-0-0-e-a1a8545b52.novalocal'. Dec 13 13:28:00.513560 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:28:00.515477 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:28:00.523441 systemd-networkd[1362]: eth0: DHCPv4 address 172.24.4.94/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 13:28:00.524700 systemd-timesyncd[1335]: Network configuration changed, trying to establish connection. Dec 13 13:28:00.525220 systemd[1]: Reached target network.target - Network. Dec 13 13:28:00.527216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:28:00.542563 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
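The DHCPv4 lease logged above (172.24.4.94/24 with gateway 172.24.4.1) can be sanity-checked with the standard library; the addresses below are taken directly from the log.

import ipaddress

iface = ipaddress.ip_interface("172.24.4.94/24")   # address acquired by eth0
gateway = ipaddress.ip_address("172.24.4.1")       # gateway offered by the DHCP server

print(iface.network)                    # 172.24.4.0/24
print(gateway in iface.network)         # True: the gateway is on-link
print(iface.network.num_addresses - 2)  # 254 usable host addresses in the /24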
Dec 13 13:28:00.558705 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:28:00.570481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 13:28:00.578401 kernel: ACPI: button: Power Button [PWRF] Dec 13 13:28:00.581350 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 13:28:00.626361 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 13:28:00.636692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:28:00.645372 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:28:00.668357 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 13:28:00.671830 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 13:28:00.672592 kernel: Console: switching to colour dummy device 80x25 Dec 13 13:28:00.673660 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 13:28:00.673699 kernel: [drm] features: -context_init Dec 13 13:28:00.675345 kernel: [drm] number of scanouts: 1 Dec 13 13:28:00.675436 kernel: [drm] number of cap sets: 0 Dec 13 13:28:00.680536 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 13:28:00.682006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:28:00.682300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:28:00.685456 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 13:28:00.685510 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:28:00.689753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:28:00.695121 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 13:28:00.714450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:28:00.714750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:28:00.719481 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:28:00.729588 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:28:00.730954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:28:00.801725 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:28:00.920217 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:28:00.922079 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:28:00.945719 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:28:00.951457 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:28:01.225982 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:28:01.665258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:28:01.667308 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:28:01.668776 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:28:01.669061 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 13 13:28:01.669720 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:28:01.670087 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:28:01.670287 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:28:01.672025 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:28:01.672125 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:28:01.672288 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:28:01.697770 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:28:01.701106 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:28:01.710289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:28:01.715886 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:28:01.719070 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:28:01.722427 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:28:01.725725 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:28:01.725810 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:28:01.734644 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:28:01.743728 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:28:01.761582 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:28:01.772598 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:28:01.787875 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:28:01.791954 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:28:01.802572 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:28:01.809578 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:28:01.816520 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:28:01.822258 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:28:01.824821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:28:01.825575 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:28:01.832619 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:28:01.839723 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:28:01.843485 systemd-networkd[1362]: eth0: Gained IPv6LL Dec 13 13:28:01.846062 systemd-timesyncd[1335]: Network configuration changed, trying to establish connection. Dec 13 13:28:01.851446 jq[1430]: false Dec 13 13:28:01.850825 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:28:01.851543 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 13:28:01.869493 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:28:01.871243 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:28:01.871525 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:28:01.878627 jq[1440]: true Dec 13 13:28:01.882935 extend-filesystems[1431]: Found loop4 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found loop5 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found loop6 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found loop7 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda1 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda2 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda3 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found usr Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda4 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda6 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda7 Dec 13 13:28:01.887042 extend-filesystems[1431]: Found vda9 Dec 13 13:28:01.887042 extend-filesystems[1431]: Checking size of /dev/vda9 Dec 13 13:28:01.891438 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:28:01.907724 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:28:01.937248 jq[1450]: true Dec 13 13:28:01.907866 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:28:01.932485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:01.946655 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:28:01.952232 update_engine[1438]: I20241213 13:28:01.952149 1438 main.cc:92] Flatcar Update Engine starting Dec 13 13:28:01.953505 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:28:01.953831 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:28:02.002865 extend-filesystems[1431]: Resized partition /dev/vda9 Dec 13 13:28:02.008522 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:28:02.018803 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 13:28:02.013301 systemd-logind[1437]: New seat seat0. Dec 13 13:28:02.032981 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:28:02.033013 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:28:02.037465 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:28:02.063244 dbus-daemon[1427]: [system] SELinux support is enabled Dec 13 13:28:02.262884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1370) Dec 13 13:28:02.065405 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:28:02.093911 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:28:02.263184 update_engine[1438]: I20241213 13:28:02.096982 1438 update_check_scheduler.cc:74] Next update check in 9m0s Dec 13 13:28:02.075493 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
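The resize logged above grows the root ext4 filesystem from 1617920 to 4635643 blocks of 4 KiB each (the extend-filesystems output that follows confirms the 4k block size), i.e. from roughly 6.2 GiB to roughly 17.7 GiB. The arithmetic, for reference:

BLOCK_SIZE = 4096  # ext4 block size on vda9, per the "(4k) blocks" note below

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"{blocks_to_gib(1_617_920):.1f} GiB")  # ~6.2 GiB before the resize
print(f"{blocks_to_gib(4_635_643):.1f} GiB")  # ~17.7 GiB after the resize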
Dec 13 13:28:02.075522 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:28:02.078294 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:28:02.078315 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:28:02.096163 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:28:02.128179 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:28:02.132419 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:28:02.232687 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:28:02.267741 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:28:02.300755 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 13:28:02.308690 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:28:02.320619 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:28:02.328800 systemd[1]: Started sshd@0-172.24.4.94:22-172.24.4.1:50092.service - OpenSSH per-connection server daemon (172.24.4.1:50092). Dec 13 13:28:02.341279 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:28:02.341545 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:28:02.355140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:28:02.373693 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:28:02.384693 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:28:02.388638 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:28:02.390041 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:28:02.403614 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:28:02.405120 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:28:02.416353 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:28:02.416353 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 13:28:02.416353 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 13:28:02.441858 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Dec 13 13:28:02.418221 systemd[1]: Starting sshkeys.service... Dec 13 13:28:02.426898 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:28:02.427629 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:28:02.461748 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:28:02.477426 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:28:02.509346 containerd[1449]: time="2024-12-13T13:28:02.507807908Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:28:02.539314 containerd[1449]: time="2024-12-13T13:28:02.539154539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.541282 containerd[1449]: time="2024-12-13T13:28:02.541246562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:28:02.541381 containerd[1449]: time="2024-12-13T13:28:02.541363321Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:28:02.541534 containerd[1449]: time="2024-12-13T13:28:02.541509846Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:28:02.541808 containerd[1449]: time="2024-12-13T13:28:02.541787767Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:28:02.541887 containerd[1449]: time="2024-12-13T13:28:02.541869180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542039 containerd[1449]: time="2024-12-13T13:28:02.542016526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542100 containerd[1449]: time="2024-12-13T13:28:02.542087469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542394 containerd[1449]: time="2024-12-13T13:28:02.542372274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542459 containerd[1449]: time="2024-12-13T13:28:02.542445341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542549 containerd[1449]: time="2024-12-13T13:28:02.542530590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542614 containerd[1449]: time="2024-12-13T13:28:02.542600001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.542757 containerd[1449]: time="2024-12-13T13:28:02.542737158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.543058 containerd[1449]: time="2024-12-13T13:28:02.543040316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:28:02.543216 containerd[1449]: time="2024-12-13T13:28:02.543197982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:28:02.543282 containerd[1449]: time="2024-12-13T13:28:02.543269156Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:28:02.543442 containerd[1449]: time="2024-12-13T13:28:02.543424707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 13:28:02.543554 containerd[1449]: time="2024-12-13T13:28:02.543538731Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:28:02.564537 containerd[1449]: time="2024-12-13T13:28:02.564498732Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:28:02.564537 containerd[1449]: time="2024-12-13T13:28:02.564605402Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:28:02.564537 containerd[1449]: time="2024-12-13T13:28:02.564627744Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:28:02.564537 containerd[1449]: time="2024-12-13T13:28:02.564655065Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:28:02.564537 containerd[1449]: time="2024-12-13T13:28:02.564674561Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:28:02.564537 containerd[1449]: time="2024-12-13T13:28:02.564848638Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:28:02.565290 containerd[1449]: time="2024-12-13T13:28:02.565213743Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565461998Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565490822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565510740Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565528583Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565544443Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565565392Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565583005Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565600729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565616047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565629853Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565643148Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565667454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565684185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566502 containerd[1449]: time="2024-12-13T13:28:02.565699674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565715333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565731394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565746923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565762061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565776658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565791887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565810101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565823396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565836441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565850467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565866397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565887677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565906001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.566836 containerd[1449]: time="2024-12-13T13:28:02.565918394Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.565975501Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.565998464Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.566010307Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.566023011Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.566033801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.566048699Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.566059579Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:28:02.567145 containerd[1449]: time="2024-12-13T13:28:02.566070680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:28:02.567350 containerd[1449]: time="2024-12-13T13:28:02.566419274Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:28:02.567350 containerd[1449]: time="2024-12-13T13:28:02.566480839Z" level=info msg="Connect containerd service" Dec 13 13:28:02.567350 containerd[1449]: time="2024-12-13T13:28:02.566520744Z" level=info msg="using legacy CRI server" Dec 13 13:28:02.567350 containerd[1449]: time="2024-12-13T13:28:02.566528899Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:28:02.567350 containerd[1449]: time="2024-12-13T13:28:02.566658042Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:28:02.567633 containerd[1449]: time="2024-12-13T13:28:02.567365278Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567707430Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567769376Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567781889Z" level=info msg="Start subscribing containerd event" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567857892Z" level=info msg="Start recovering state" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567931129Z" level=info msg="Start event monitor" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567953702Z" level=info msg="Start snapshots syncer" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567964923Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.567975272Z" level=info msg="Start streaming server" Dec 13 13:28:02.568239 containerd[1449]: time="2024-12-13T13:28:02.568057787Z" level=info msg="containerd successfully booted in 0.061612s" Dec 13 13:28:02.569476 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:28:03.529884 sshd[1510]: Accepted publickey for core from 172.24.4.1 port 50092 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:03.559452 sshd-session[1510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:03.587030 systemd-logind[1437]: New session 1 of user core. Dec 13 13:28:03.592245 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:28:03.608248 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:28:03.641180 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:28:03.658594 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:28:03.712915 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:28:03.882739 systemd[1533]: Queued start job for default target default.target. Dec 13 13:28:03.889287 systemd[1533]: Created slice app.slice - User Application Slice. Dec 13 13:28:03.889312 systemd[1533]: Reached target paths.target - Paths. 
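The "failed to load cni during init" error above is expected at this point: containerd's CRI plugin looks for a network configuration under /etc/cni/net.d (NetworkPluginConfDir in the config dump), and that directory stays empty until a network provider, Calico later in this log, installs one. Purely as a sketch of the file format the plugin expects, and not the configuration Calico actually writes, a minimal conflist could be dropped in like this:

  # Hypothetical example only -- on this node Calico installs its own conflist later.
  cat <<'EOF' > /etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "0.3.1",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
      }
    ]
  }
  EOF

The subnet here simply reuses the 192.168.1.0/24 pod CIDR the kubelet reports further down; any valid CNI configuration in that directory clears the warning.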
Dec 13 13:28:03.889630 systemd[1533]: Reached target timers.target - Timers. Dec 13 13:28:03.891207 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:28:03.938495 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:28:03.938795 systemd[1533]: Reached target sockets.target - Sockets. Dec 13 13:28:03.938838 systemd[1533]: Reached target basic.target - Basic System. Dec 13 13:28:03.938936 systemd[1533]: Reached target default.target - Main User Target. Dec 13 13:28:03.938999 systemd[1533]: Startup finished in 205ms. Dec 13 13:28:03.939254 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:28:03.952783 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:28:04.306713 systemd[1]: Started sshd@1-172.24.4.94:22-172.24.4.1:44858.service - OpenSSH per-connection server daemon (172.24.4.1:44858). Dec 13 13:28:05.613727 sshd[1544]: Accepted publickey for core from 172.24.4.1 port 44858 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:05.617966 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:05.631808 systemd-logind[1437]: New session 2 of user core. Dec 13 13:28:05.637881 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:28:05.947425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:05.974588 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:28:06.223732 sshd[1548]: Connection closed by 172.24.4.1 port 44858 Dec 13 13:28:06.224993 sshd-session[1544]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:06.245324 systemd[1]: sshd@1-172.24.4.94:22-172.24.4.1:44858.service: Deactivated successfully. Dec 13 13:28:06.248904 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:28:06.253930 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:28:06.263390 systemd[1]: Started sshd@2-172.24.4.94:22-172.24.4.1:44874.service - OpenSSH per-connection server daemon (172.24.4.1:44874). Dec 13 13:28:06.274811 systemd-logind[1437]: Removed session 2. Dec 13 13:28:07.476448 agetty[1515]: failed to open credentials directory Dec 13 13:28:07.478186 agetty[1514]: failed to open credentials directory Dec 13 13:28:07.529226 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:28:07.537416 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 44874 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:07.540727 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:07.544054 login[1515]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:28:07.547518 systemd-logind[1437]: New session 3 of user core. Dec 13 13:28:07.565195 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:28:07.573236 systemd-logind[1437]: New session 4 of user core. Dec 13 13:28:07.585064 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:28:07.592935 systemd-logind[1437]: New session 5 of user core. Dec 13 13:28:07.603653 systemd[1]: Started session-5.scope - Session 5 of User core. 
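The "Referenced but unset environment variable" notice from kubelet.service is harmless: the unit's ExecStart expands $KUBELET_EXTRA_ARGS and $KUBELET_KUBEADM_ARGS, and both are simply empty until something populates them. As a rough sketch of the kubeadm-style drop-in layout that produces this pattern (the unit actually shipped on this image may differ):

  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -- illustrative sketch, not the unit on this host
  [Service]
  # kubeadm writes runtime flags here during "kubeadm init" / "kubeadm join"
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # administrators can add extra flags here
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS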
Dec 13 13:28:08.233401 sshd[1571]: Connection closed by 172.24.4.1 port 44874 Dec 13 13:28:08.234518 sshd-session[1559]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:08.242962 systemd[1]: sshd@2-172.24.4.94:22-172.24.4.1:44874.service: Deactivated successfully. Dec 13 13:28:08.247784 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:28:08.249703 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:28:08.251058 systemd-logind[1437]: Removed session 5. Dec 13 13:28:08.863100 coreos-metadata[1426]: Dec 13 13:28:08.863 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:28:08.957447 coreos-metadata[1426]: Dec 13 13:28:08.957 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 13:28:09.164392 coreos-metadata[1426]: Dec 13 13:28:09.164 INFO Fetch successful Dec 13 13:28:09.164392 coreos-metadata[1426]: Dec 13 13:28:09.164 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 13:28:09.180189 coreos-metadata[1426]: Dec 13 13:28:09.180 INFO Fetch successful Dec 13 13:28:09.180461 coreos-metadata[1426]: Dec 13 13:28:09.180 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 13:28:09.196003 coreos-metadata[1426]: Dec 13 13:28:09.195 INFO Fetch successful Dec 13 13:28:09.196003 coreos-metadata[1426]: Dec 13 13:28:09.195 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 13:28:09.206810 coreos-metadata[1426]: Dec 13 13:28:09.206 INFO Fetch successful Dec 13 13:28:09.207059 coreos-metadata[1426]: Dec 13 13:28:09.206 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 13:28:09.214257 coreos-metadata[1426]: Dec 13 13:28:09.214 INFO Fetch successful Dec 13 13:28:09.214257 coreos-metadata[1426]: Dec 13 13:28:09.214 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 13:28:09.227625 coreos-metadata[1426]: Dec 13 13:28:09.226 INFO Fetch successful Dec 13 13:28:09.282623 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:28:09.287455 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:28:09.455657 kubelet[1554]: E1213 13:28:09.455543 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:28:09.460601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:28:09.460939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:28:09.461971 systemd[1]: kubelet.service: Consumed 2.190s CPU time. 
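The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written when the node is bootstrapped (for example by kubeadm), so this crash and the scheduled restarts that follow are expected on a fresh machine. As an illustration only, a minimal KubeletConfiguration consistent with the settings this node later reports (systemd cgroup driver, static pods under /etc/kubernetes/manifests, client CA at /etc/kubernetes/pki/ca.crt) might look like:

  # /var/lib/kubelet/config.yaml -- illustrative sketch, not the file eventually installed on this node
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.crt
  authorization:
    mode: Webhook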
Dec 13 13:28:09.598900 coreos-metadata[1522]: Dec 13 13:28:09.598 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:28:09.642002 coreos-metadata[1522]: Dec 13 13:28:09.641 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 13:28:09.658939 coreos-metadata[1522]: Dec 13 13:28:09.658 INFO Fetch successful Dec 13 13:28:09.658939 coreos-metadata[1522]: Dec 13 13:28:09.658 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:28:09.670990 coreos-metadata[1522]: Dec 13 13:28:09.670 INFO Fetch successful Dec 13 13:28:09.679107 unknown[1522]: wrote ssh authorized keys file for user: core Dec 13 13:28:09.751816 update-ssh-keys[1609]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:28:09.753091 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:28:09.756987 systemd[1]: Finished sshkeys.service. Dec 13 13:28:09.763094 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:28:09.763498 systemd[1]: Startup finished in 1.190s (kernel) + 16.459s (initrd) + 12.557s (userspace) = 30.208s. Dec 13 13:28:18.261111 systemd[1]: Started sshd@3-172.24.4.94:22-172.24.4.1:60410.service - OpenSSH per-connection server daemon (172.24.4.1:60410). Dec 13 13:28:19.485786 sshd[1613]: Accepted publickey for core from 172.24.4.1 port 60410 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:19.488660 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:19.491459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:28:19.502864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:19.510138 systemd-logind[1437]: New session 6 of user core. Dec 13 13:28:19.517890 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:28:19.927675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:19.945474 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:28:20.201875 kubelet[1625]: E1213 13:28:20.201795 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:28:20.208476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:28:20.208785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:28:20.214736 sshd[1618]: Connection closed by 172.24.4.1 port 60410 Dec 13 13:28:20.215597 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:20.229685 systemd[1]: sshd@3-172.24.4.94:22-172.24.4.1:60410.service: Deactivated successfully. Dec 13 13:28:20.232683 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:28:20.235640 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:28:20.244878 systemd[1]: Started sshd@4-172.24.4.94:22-172.24.4.1:60412.service - OpenSSH per-connection server daemon (172.24.4.1:60412). Dec 13 13:28:20.247750 systemd-logind[1437]: Removed session 6. 
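The coreos-metadata agents above are simply walking the OpenStack/EC2-compatible metadata service at 169.254.169.254; the same endpoints can be queried by hand from the instance when debugging SSH-key or hostname problems, for example:

  # Fetch the SSH key and hostname the agents retrieved above (run on the instance itself)
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
  curl -s http://169.254.169.254/latest/meta-data/hostname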
Dec 13 13:28:21.436982 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 60412 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:21.439829 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:21.450629 systemd-logind[1437]: New session 7 of user core. Dec 13 13:28:21.462646 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:28:22.044173 sshd[1637]: Connection closed by 172.24.4.1 port 60412 Dec 13 13:28:22.045259 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:22.058928 systemd[1]: sshd@4-172.24.4.94:22-172.24.4.1:60412.service: Deactivated successfully. Dec 13 13:28:22.062216 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:28:22.066092 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:28:22.071915 systemd[1]: Started sshd@5-172.24.4.94:22-172.24.4.1:60414.service - OpenSSH per-connection server daemon (172.24.4.1:60414). Dec 13 13:28:22.075304 systemd-logind[1437]: Removed session 7. Dec 13 13:28:23.720123 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 60414 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:23.723256 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:23.737816 systemd-logind[1437]: New session 8 of user core. Dec 13 13:28:23.751749 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:28:24.333388 sshd[1644]: Connection closed by 172.24.4.1 port 60414 Dec 13 13:28:24.334216 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:24.346523 systemd[1]: sshd@5-172.24.4.94:22-172.24.4.1:60414.service: Deactivated successfully. Dec 13 13:28:24.349903 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:28:24.353443 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:28:24.359947 systemd[1]: Started sshd@6-172.24.4.94:22-172.24.4.1:55290.service - OpenSSH per-connection server daemon (172.24.4.1:55290). Dec 13 13:28:24.363103 systemd-logind[1437]: Removed session 8. Dec 13 13:28:25.750257 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 55290 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:25.753034 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:25.762637 systemd-logind[1437]: New session 9 of user core. Dec 13 13:28:25.774712 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:28:26.368054 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:28:26.368803 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:26.409034 sudo[1652]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:26.584401 sshd[1651]: Connection closed by 172.24.4.1 port 55290 Dec 13 13:28:26.585058 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:26.601875 systemd[1]: sshd@6-172.24.4.94:22-172.24.4.1:55290.service: Deactivated successfully. Dec 13 13:28:26.606396 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:28:26.610638 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:28:26.616971 systemd[1]: Started sshd@7-172.24.4.94:22-172.24.4.1:55304.service - OpenSSH per-connection server daemon (172.24.4.1:55304). 
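The setenforce 1 call above switches SELinux to enforcing mode for the running system only; it does not persist across reboots. The effective mode can be confirmed afterwards with:

  # Show the runtime SELinux mode after the setenforce call above
  getenforce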
Dec 13 13:28:26.620650 systemd-logind[1437]: Removed session 9. Dec 13 13:28:27.985943 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 55304 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:27.988890 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:27.999377 systemd-logind[1437]: New session 10 of user core. Dec 13 13:28:28.009717 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:28:28.520694 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:28:28.522189 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:28.530470 sudo[1661]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:28.542464 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:28:28.543132 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:28.572286 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:28:28.656743 augenrules[1683]: No rules Dec 13 13:28:28.658101 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:28:28.658514 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:28:28.661438 sudo[1660]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:28.824212 sshd[1659]: Connection closed by 172.24.4.1 port 55304 Dec 13 13:28:28.840569 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:28.842966 systemd[1]: Started sshd@8-172.24.4.94:22-172.24.4.1:55314.service - OpenSSH per-connection server daemon (172.24.4.1:55314). Dec 13 13:28:28.852743 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:28:28.857005 systemd[1]: sshd@7-172.24.4.94:22-172.24.4.1:55304.service: Deactivated successfully. Dec 13 13:28:28.862740 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:28:28.866034 systemd-logind[1437]: Removed session 10. Dec 13 13:28:30.145316 sshd[1689]: Accepted publickey for core from 172.24.4.1 port 55314 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:28:30.148173 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:30.155867 systemd-logind[1437]: New session 11 of user core. Dec 13 13:28:30.161671 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:28:30.441114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:28:30.454298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:30.597323 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:28:30.598084 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:31.351664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
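The "No rules" message from augenrules is the direct result of the two files removed just before it: audit-rules.service rebuilds the kernel audit rule set from the fragments under /etc/audit/rules.d/, and that directory is now empty. For illustration, such a fragment uses ordinary auditctl syntax, e.g.:

  # /etc/audit/rules.d/10-example.rules -- hypothetical fragment, not one of the files removed above
  # watch /etc/passwd for writes and attribute changes
  -w /etc/passwd -p wa -k identity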
Dec 13 13:28:31.364184 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:28:31.460357 kubelet[1713]: E1213 13:28:31.460281 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:28:31.462961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:28:31.463113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:28:33.050204 systemd-resolved[1321]: Clock change detected. Flushing caches. Dec 13 13:28:33.050756 systemd-timesyncd[1335]: Contacted time server 5.135.158.34:123 (2.flatcar.pool.ntp.org). Dec 13 13:28:33.050845 systemd-timesyncd[1335]: Initial clock synchronization to Fri 2024-12-13 13:28:33.050119 UTC. Dec 13 13:28:33.739508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:33.750427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:33.827119 systemd[1]: Reloading requested from client PID 1743 ('systemctl') (unit session-11.scope)... Dec 13 13:28:33.827140 systemd[1]: Reloading... Dec 13 13:28:33.945007 zram_generator::config[1787]: No configuration found. Dec 13 13:28:34.240355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:28:34.324135 systemd[1]: Reloading finished in 496 ms. Dec 13 13:28:34.380184 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:28:34.380257 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:28:34.380632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:34.385279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:34.526108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:34.541684 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:34.599130 kubelet[1847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:34.599130 kubelet[1847]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:28:34.599130 kubelet[1847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
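The three deprecation warnings just above are informational: the flags still work, but the kubelet wants them expressed in the --config file instead. Assuming a reasonably recent KubeletConfiguration, the equivalents would be fields along these lines (sketch only, values taken from paths seen elsewhere in this log):

  # Fields that could replace the deprecated flags reported above (kubelet.config.k8s.io/v1beta1)
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  # --pod-infra-container-image has no config-file equivalent; per the warning, newer kubelets
  # take the sandbox image from the CRI runtime instead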
Dec 13 13:28:34.599970 kubelet[1847]: I1213 13:28:34.599164 1847 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:28:35.126309 kubelet[1847]: I1213 13:28:35.126225 1847 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 13:28:35.126309 kubelet[1847]: I1213 13:28:35.126272 1847 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:28:35.126631 kubelet[1847]: I1213 13:28:35.126566 1847 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 13:28:35.152836 kubelet[1847]: I1213 13:28:35.152721 1847 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:35.168118 kubelet[1847]: E1213 13:28:35.167860 1847 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 13:28:35.168118 kubelet[1847]: I1213 13:28:35.167978 1847 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 13:28:35.184577 kubelet[1847]: I1213 13:28:35.184432 1847 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:28:35.189985 kubelet[1847]: I1213 13:28:35.189512 1847 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 13:28:35.189985 kubelet[1847]: I1213 13:28:35.189845 1847 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:28:35.190433 kubelet[1847]: I1213 13:28:35.189984 1847 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.94","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 13:28:35.190546 kubelet[1847]: I1213 13:28:35.190442 1847 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 13:28:35.190546 kubelet[1847]: I1213 13:28:35.190471 1847 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 13:28:35.190735 kubelet[1847]: I1213 13:28:35.190690 1847 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:35.196449 kubelet[1847]: I1213 13:28:35.196386 1847 kubelet.go:408] "Attempting to sync node with API server" Dec 13 13:28:35.196505 kubelet[1847]: I1213 13:28:35.196468 1847 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:28:35.196628 kubelet[1847]: I1213 13:28:35.196583 1847 kubelet.go:314] "Adding apiserver pod source" Dec 13 13:28:35.196628 kubelet[1847]: I1213 13:28:35.196622 1847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:28:35.196921 kubelet[1847]: E1213 13:28:35.196767 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:35.196921 kubelet[1847]: E1213 13:28:35.196829 1847 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:35.210237 kubelet[1847]: I1213 13:28:35.209848 1847 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:28:35.210857 kubelet[1847]: W1213 13:28:35.210772 1847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:28:35.211018 kubelet[1847]: E1213 13:28:35.210875 1847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 13:28:35.213421 kubelet[1847]: W1213 13:28:35.213301 1847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:28:35.213421 kubelet[1847]: E1213 13:28:35.213353 1847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.94\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 13:28:35.215411 kubelet[1847]: I1213 13:28:35.215061 1847 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:28:35.216980 kubelet[1847]: W1213 13:28:35.216490 1847 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
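The FlexVolume directory /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is only being (re)created here; the nodeagent~uds driver that Calico uses is installed into it later by the calico-node pod through the "flexvol-driver-host" host-path mount listed further down, so the "driver call failed ... executable file not found" probe errors that follow are transient. A hypothetical excerpt of the DaemonSet volume behind that mount (the real Calico manifest may differ in detail):

  # Hypothetical excerpt of the calico-node volume backing "flexvol-driver-host"
  volumes:
    - name: flexvol-driver-host
      hostPath:
        type: DirectoryOrCreate
        path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds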
Dec 13 13:28:35.218293 kubelet[1847]: I1213 13:28:35.217870 1847 server.go:1269] "Started kubelet" Dec 13 13:28:35.220748 kubelet[1847]: I1213 13:28:35.220688 1847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:28:35.238309 kubelet[1847]: I1213 13:28:35.238129 1847 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:28:35.241548 kubelet[1847]: I1213 13:28:35.240602 1847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:28:35.243527 kubelet[1847]: I1213 13:28:35.241771 1847 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:28:35.243527 kubelet[1847]: I1213 13:28:35.242310 1847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:28:35.256031 kubelet[1847]: I1213 13:28:35.252963 1847 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 13:28:35.256031 kubelet[1847]: E1213 13:28:35.253422 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:35.256031 kubelet[1847]: I1213 13:28:35.253941 1847 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 13:28:35.256430 kubelet[1847]: I1213 13:28:35.256376 1847 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:28:35.265346 kubelet[1847]: I1213 13:28:35.264272 1847 server.go:460] "Adding debug handlers to kubelet server" Dec 13 13:28:35.267005 kubelet[1847]: E1213 13:28:35.261858 1847 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.94\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 13:28:35.268395 kubelet[1847]: I1213 13:28:35.268357 1847 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:28:35.268602 kubelet[1847]: I1213 13:28:35.268560 1847 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:28:35.274311 kubelet[1847]: E1213 13:28:35.274256 1847 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:28:35.274826 kubelet[1847]: I1213 13:28:35.274752 1847 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:28:35.301785 kubelet[1847]: I1213 13:28:35.301734 1847 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:28:35.301785 kubelet[1847]: I1213 13:28:35.301753 1847 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:28:35.301785 kubelet[1847]: I1213 13:28:35.301799 1847 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:35.307304 kubelet[1847]: I1213 13:28:35.307270 1847 policy_none.go:49] "None policy: Start" Dec 13 13:28:35.308381 kubelet[1847]: I1213 13:28:35.308331 1847 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:28:35.308381 kubelet[1847]: I1213 13:28:35.308354 1847 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:28:35.327662 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
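With the server started and listening on port 10250, a quick local sanity check is possible: by default the kubelet also exposes an unauthenticated health endpoint on localhost port 10248 (assuming that default has not been changed on this node):

  # Local kubelet health probe; returns "ok" once the sync loops are running
  curl -s http://127.0.0.1:10248/healthz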
Dec 13 13:28:35.337465 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:28:35.341243 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:28:35.349862 kubelet[1847]: I1213 13:28:35.348696 1847 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:28:35.349862 kubelet[1847]: I1213 13:28:35.348933 1847 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:28:35.349862 kubelet[1847]: I1213 13:28:35.348950 1847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:28:35.350472 kubelet[1847]: I1213 13:28:35.350225 1847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:28:35.353319 kubelet[1847]: E1213 13:28:35.353093 1847 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.94\" not found" Dec 13 13:28:35.358628 kubelet[1847]: I1213 13:28:35.358484 1847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:28:35.360910 kubelet[1847]: I1213 13:28:35.360290 1847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:28:35.360910 kubelet[1847]: I1213 13:28:35.360320 1847 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:28:35.360910 kubelet[1847]: I1213 13:28:35.360340 1847 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:28:35.360910 kubelet[1847]: E1213 13:28:35.360383 1847 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 13:28:35.450643 kubelet[1847]: I1213 13:28:35.450420 1847 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.94" Dec 13 13:28:35.458580 kubelet[1847]: I1213 13:28:35.458543 1847 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.94" Dec 13 13:28:35.458580 kubelet[1847]: E1213 13:28:35.458580 1847 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.94\": node \"172.24.4.94\" not found" Dec 13 13:28:35.485458 kubelet[1847]: E1213 13:28:35.485378 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:35.586182 kubelet[1847]: E1213 13:28:35.586028 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:35.593277 sudo[1697]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:35.686539 kubelet[1847]: E1213 13:28:35.686447 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:35.787632 kubelet[1847]: E1213 13:28:35.787494 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:35.855295 sshd[1693]: Connection closed by 172.24.4.1 port 55314 Dec 13 13:28:35.856512 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:35.864440 systemd[1]: sshd@8-172.24.4.94:22-172.24.4.1:55314.service: Deactivated successfully. Dec 13 13:28:35.870865 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:28:35.871627 systemd[1]: session-11.scope: Consumed 1.113s CPU time, 75.0M memory peak, 0B memory swap peak. 
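The node object for 172.24.4.94 now exists in the API server, even though the kubelet's informer cache briefly keeps logging "node not found" until its watches catch up. From any machine with cluster credentials the registration can be confirmed with:

  # Confirm the freshly registered node (requires a working kubeconfig for this cluster)
  kubectl get node 172.24.4.94 -o wide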
Dec 13 13:28:35.873315 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:28:35.876668 systemd-logind[1437]: Removed session 11. Dec 13 13:28:35.888580 kubelet[1847]: E1213 13:28:35.888505 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:35.989442 kubelet[1847]: E1213 13:28:35.989351 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.090558 kubelet[1847]: E1213 13:28:36.090346 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.128866 kubelet[1847]: I1213 13:28:36.128758 1847 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 13:28:36.129450 kubelet[1847]: W1213 13:28:36.129176 1847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:28:36.129450 kubelet[1847]: W1213 13:28:36.129337 1847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:28:36.191124 kubelet[1847]: E1213 13:28:36.191060 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.197550 kubelet[1847]: E1213 13:28:36.197493 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:36.291612 kubelet[1847]: E1213 13:28:36.291506 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.392943 kubelet[1847]: E1213 13:28:36.392665 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.493571 kubelet[1847]: E1213 13:28:36.493468 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.594719 kubelet[1847]: E1213 13:28:36.594616 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.695335 kubelet[1847]: E1213 13:28:36.695106 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 13:28:36.797999 kubelet[1847]: I1213 13:28:36.797518 1847 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:28:36.798782 containerd[1449]: time="2024-12-13T13:28:36.798648090Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 13:28:36.799582 kubelet[1847]: I1213 13:28:36.799210 1847 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:28:37.198671 kubelet[1847]: I1213 13:28:37.198225 1847 apiserver.go:52] "Watching apiserver" Dec 13 13:28:37.199212 kubelet[1847]: E1213 13:28:37.199102 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:37.214732 kubelet[1847]: E1213 13:28:37.213276 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:37.239135 systemd[1]: Created slice kubepods-besteffort-pod60239819_d7d7_469a_a3c7_c0c6a4778ad1.slice - libcontainer container kubepods-besteffort-pod60239819_d7d7_469a_a3c7_c0c6a4778ad1.slice. Dec 13 13:28:37.257931 kubelet[1847]: I1213 13:28:37.257350 1847 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 13:28:37.257876 systemd[1]: Created slice kubepods-besteffort-podbc41c875_1c24_419f_a958_9897273ef588.slice - libcontainer container kubepods-besteffort-podbc41c875_1c24_419f_a958_9897273ef588.slice. Dec 13 13:28:37.279629 kubelet[1847]: I1213 13:28:37.277837 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-var-run-calico\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.279629 kubelet[1847]: I1213 13:28:37.278038 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-var-lib-calico\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.279629 kubelet[1847]: I1213 13:28:37.278106 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b554734-fc9d-4b74-9b2d-0c0e98baacdf-registration-dir\") pod \"csi-node-driver-flq57\" (UID: \"6b554734-fc9d-4b74-9b2d-0c0e98baacdf\") " pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:37.279629 kubelet[1847]: I1213 13:28:37.278165 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwss\" (UniqueName: \"kubernetes.io/projected/6b554734-fc9d-4b74-9b2d-0c0e98baacdf-kube-api-access-bgwss\") pod \"csi-node-driver-flq57\" (UID: \"6b554734-fc9d-4b74-9b2d-0c0e98baacdf\") " pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:37.279629 kubelet[1847]: I1213 13:28:37.278216 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbktj\" (UniqueName: \"kubernetes.io/projected/60239819-d7d7-469a-a3c7-c0c6a4778ad1-kube-api-access-cbktj\") pod \"kube-proxy-xb862\" (UID: \"60239819-d7d7-469a-a3c7-c0c6a4778ad1\") " pod="kube-system/kube-proxy-xb862" Dec 13 13:28:37.280366 kubelet[1847]: I1213 13:28:37.278289 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-lib-modules\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.280366 kubelet[1847]: I1213 13:28:37.278341 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-xtables-lock\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.280366 kubelet[1847]: I1213 13:28:37.278388 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60239819-d7d7-469a-a3c7-c0c6a4778ad1-xtables-lock\") pod \"kube-proxy-xb862\" (UID: \"60239819-d7d7-469a-a3c7-c0c6a4778ad1\") " pod="kube-system/kube-proxy-xb862" Dec 13 13:28:37.280366 kubelet[1847]: I1213 13:28:37.278436 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60239819-d7d7-469a-a3c7-c0c6a4778ad1-lib-modules\") pod \"kube-proxy-xb862\" (UID: \"60239819-d7d7-469a-a3c7-c0c6a4778ad1\") " pod="kube-system/kube-proxy-xb862" Dec 13 13:28:37.280366 kubelet[1847]: I1213 13:28:37.278482 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-cni-net-dir\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.280696 kubelet[1847]: I1213 13:28:37.278532 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b554734-fc9d-4b74-9b2d-0c0e98baacdf-kubelet-dir\") pod \"csi-node-driver-flq57\" (UID: \"6b554734-fc9d-4b74-9b2d-0c0e98baacdf\") " pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:37.280696 kubelet[1847]: I1213 13:28:37.278665 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b554734-fc9d-4b74-9b2d-0c0e98baacdf-socket-dir\") pod \"csi-node-driver-flq57\" (UID: \"6b554734-fc9d-4b74-9b2d-0c0e98baacdf\") " pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:37.280696 kubelet[1847]: I1213 13:28:37.278721 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc41c875-1c24-419f-a958-9897273ef588-node-certs\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.280696 kubelet[1847]: I1213 13:28:37.278765 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-cni-log-dir\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.280696 kubelet[1847]: I1213 13:28:37.278950 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-cni-bin-dir\") pod \"calico-node-f2np9\" 
(UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.281089 kubelet[1847]: I1213 13:28:37.279032 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-flexvol-driver-host\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.281089 kubelet[1847]: I1213 13:28:37.279086 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9nd\" (UniqueName: \"kubernetes.io/projected/bc41c875-1c24-419f-a958-9897273ef588-kube-api-access-gh9nd\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.281089 kubelet[1847]: I1213 13:28:37.279131 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b554734-fc9d-4b74-9b2d-0c0e98baacdf-varrun\") pod \"csi-node-driver-flq57\" (UID: \"6b554734-fc9d-4b74-9b2d-0c0e98baacdf\") " pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:37.281089 kubelet[1847]: I1213 13:28:37.279175 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60239819-d7d7-469a-a3c7-c0c6a4778ad1-kube-proxy\") pod \"kube-proxy-xb862\" (UID: \"60239819-d7d7-469a-a3c7-c0c6a4778ad1\") " pod="kube-system/kube-proxy-xb862" Dec 13 13:28:37.281089 kubelet[1847]: I1213 13:28:37.279347 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc41c875-1c24-419f-a958-9897273ef588-policysync\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.281411 kubelet[1847]: I1213 13:28:37.279436 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc41c875-1c24-419f-a958-9897273ef588-tigera-ca-bundle\") pod \"calico-node-f2np9\" (UID: \"bc41c875-1c24-419f-a958-9897273ef588\") " pod="calico-system/calico-node-f2np9" Dec 13 13:28:37.386067 kubelet[1847]: E1213 13:28:37.385986 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.386067 kubelet[1847]: W1213 13:28:37.386035 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.386067 kubelet[1847]: E1213 13:28:37.386073 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.386461 kubelet[1847]: E1213 13:28:37.386386 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.386461 kubelet[1847]: W1213 13:28:37.386407 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.386461 kubelet[1847]: E1213 13:28:37.386429 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.389947 kubelet[1847]: E1213 13:28:37.386722 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.389947 kubelet[1847]: W1213 13:28:37.386757 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.389947 kubelet[1847]: E1213 13:28:37.386781 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.389947 kubelet[1847]: E1213 13:28:37.387201 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.389947 kubelet[1847]: W1213 13:28:37.387222 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.389947 kubelet[1847]: E1213 13:28:37.387243 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.389947 kubelet[1847]: E1213 13:28:37.388176 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.389947 kubelet[1847]: W1213 13:28:37.388201 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.389947 kubelet[1847]: E1213 13:28:37.388226 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.390596 kubelet[1847]: E1213 13:28:37.390031 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.390596 kubelet[1847]: W1213 13:28:37.390056 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.390596 kubelet[1847]: E1213 13:28:37.390080 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.393953 kubelet[1847]: E1213 13:28:37.392176 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.393953 kubelet[1847]: W1213 13:28:37.392210 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.393953 kubelet[1847]: E1213 13:28:37.392286 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.393953 kubelet[1847]: E1213 13:28:37.392646 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.393953 kubelet[1847]: W1213 13:28:37.392672 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.393953 kubelet[1847]: E1213 13:28:37.392697 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.394379 kubelet[1847]: E1213 13:28:37.394138 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.394379 kubelet[1847]: W1213 13:28:37.394162 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.394379 kubelet[1847]: E1213 13:28:37.394189 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.394565 kubelet[1847]: E1213 13:28:37.394498 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.394565 kubelet[1847]: W1213 13:28:37.394519 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.394565 kubelet[1847]: E1213 13:28:37.394540 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.396063 kubelet[1847]: E1213 13:28:37.396015 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.396063 kubelet[1847]: W1213 13:28:37.396053 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.396235 kubelet[1847]: E1213 13:28:37.396077 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.399005 kubelet[1847]: E1213 13:28:37.396515 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.399005 kubelet[1847]: W1213 13:28:37.396552 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.399005 kubelet[1847]: E1213 13:28:37.396586 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.399259 kubelet[1847]: E1213 13:28:37.399063 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.399259 kubelet[1847]: W1213 13:28:37.399091 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.399259 kubelet[1847]: E1213 13:28:37.399151 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.401684 kubelet[1847]: E1213 13:28:37.401185 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.401684 kubelet[1847]: W1213 13:28:37.401226 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.401684 kubelet[1847]: E1213 13:28:37.401265 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.402009 kubelet[1847]: E1213 13:28:37.401736 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.402009 kubelet[1847]: W1213 13:28:37.401760 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.403027 kubelet[1847]: E1213 13:28:37.402141 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.403027 kubelet[1847]: W1213 13:28:37.402175 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.403027 kubelet[1847]: E1213 13:28:37.402203 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.403027 kubelet[1847]: E1213 13:28:37.402263 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.403027 kubelet[1847]: E1213 13:28:37.402486 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.403027 kubelet[1847]: W1213 13:28:37.402506 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.403027 kubelet[1847]: E1213 13:28:37.402710 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.403027 kubelet[1847]: E1213 13:28:37.403023 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.405012 kubelet[1847]: W1213 13:28:37.403044 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.405012 kubelet[1847]: E1213 13:28:37.403210 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.405012 kubelet[1847]: E1213 13:28:37.403426 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.405012 kubelet[1847]: W1213 13:28:37.403446 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.405012 kubelet[1847]: E1213 13:28:37.403483 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.405665 kubelet[1847]: E1213 13:28:37.405613 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.405665 kubelet[1847]: W1213 13:28:37.405655 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.406059 kubelet[1847]: E1213 13:28:37.405710 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.406303 kubelet[1847]: E1213 13:28:37.406109 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.406303 kubelet[1847]: W1213 13:28:37.406131 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.406303 kubelet[1847]: E1213 13:28:37.406204 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.406623 kubelet[1847]: E1213 13:28:37.406469 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.406623 kubelet[1847]: W1213 13:28:37.406490 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.406623 kubelet[1847]: E1213 13:28:37.406566 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.407146 kubelet[1847]: E1213 13:28:37.406829 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.407146 kubelet[1847]: W1213 13:28:37.406852 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.407146 kubelet[1847]: E1213 13:28:37.407009 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.407567 kubelet[1847]: E1213 13:28:37.407312 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.407567 kubelet[1847]: W1213 13:28:37.407334 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.407567 kubelet[1847]: E1213 13:28:37.407400 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.408570 kubelet[1847]: E1213 13:28:37.407706 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.408570 kubelet[1847]: W1213 13:28:37.407729 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.408570 kubelet[1847]: E1213 13:28:37.407802 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.408570 kubelet[1847]: E1213 13:28:37.408092 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.408570 kubelet[1847]: W1213 13:28:37.408112 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.408570 kubelet[1847]: E1213 13:28:37.408278 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.408570 kubelet[1847]: E1213 13:28:37.408434 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.408570 kubelet[1847]: W1213 13:28:37.408455 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.408570 kubelet[1847]: E1213 13:28:37.408498 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.409180 kubelet[1847]: E1213 13:28:37.408956 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.409180 kubelet[1847]: W1213 13:28:37.408978 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.409180 kubelet[1847]: E1213 13:28:37.409000 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.409470 kubelet[1847]: E1213 13:28:37.409370 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.409470 kubelet[1847]: W1213 13:28:37.409391 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.409470 kubelet[1847]: E1213 13:28:37.409411 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.429049 kubelet[1847]: E1213 13:28:37.429009 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.431871 kubelet[1847]: W1213 13:28:37.429225 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.431871 kubelet[1847]: E1213 13:28:37.429275 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.432270 kubelet[1847]: E1213 13:28:37.432240 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.432964 kubelet[1847]: W1213 13:28:37.432398 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.432964 kubelet[1847]: E1213 13:28:37.432443 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:37.434289 kubelet[1847]: E1213 13:28:37.434262 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:37.435956 kubelet[1847]: W1213 13:28:37.434689 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:37.435956 kubelet[1847]: E1213 13:28:37.434732 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:37.553523 containerd[1449]: time="2024-12-13T13:28:37.553381302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xb862,Uid:60239819-d7d7-469a-a3c7-c0c6a4778ad1,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:37.570841 containerd[1449]: time="2024-12-13T13:28:37.570719179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f2np9,Uid:bc41c875-1c24-419f-a958-9897273ef588,Namespace:calico-system,Attempt:0,}" Dec 13 13:28:38.200127 kubelet[1847]: E1213 13:28:38.200028 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:38.310715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120202288.mount: Deactivated successfully. Dec 13 13:28:38.326148 containerd[1449]: time="2024-12-13T13:28:38.326011770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:38.337276 containerd[1449]: time="2024-12-13T13:28:38.337132597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 13:28:38.339031 containerd[1449]: time="2024-12-13T13:28:38.338739791Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:38.341485 containerd[1449]: time="2024-12-13T13:28:38.341395151Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:38.342134 containerd[1449]: time="2024-12-13T13:28:38.342023971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:28:38.351613 containerd[1449]: time="2024-12-13T13:28:38.351470187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:38.356001 containerd[1449]: time="2024-12-13T13:28:38.354872769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.834511ms" Dec 13 13:28:38.358788 containerd[1449]: time="2024-12-13T13:28:38.358690309Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 804.905681ms" Dec 13 13:28:38.561673 containerd[1449]: time="2024-12-13T13:28:38.561528622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:38.561673 containerd[1449]: time="2024-12-13T13:28:38.561691447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:38.562430 containerd[1449]: time="2024-12-13T13:28:38.561731021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:38.562430 containerd[1449]: time="2024-12-13T13:28:38.561967004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:38.584011 containerd[1449]: time="2024-12-13T13:28:38.583863140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:38.584295 containerd[1449]: time="2024-12-13T13:28:38.583957697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:38.584295 containerd[1449]: time="2024-12-13T13:28:38.584245738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:38.590415 containerd[1449]: time="2024-12-13T13:28:38.585088468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:38.674116 systemd[1]: Started cri-containerd-0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df.scope - libcontainer container 0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df. Dec 13 13:28:38.676822 systemd[1]: Started cri-containerd-996938b1540a9e47bc80853e32d3d3e6de852a631045fe3fad43e27e1e4766fc.scope - libcontainer container 996938b1540a9e47bc80853e32d3d3e6de852a631045fe3fad43e27e1e4766fc. 
Dec 13 13:28:38.714933 containerd[1449]: time="2024-12-13T13:28:38.714824498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f2np9,Uid:bc41c875-1c24-419f-a958-9897273ef588,Namespace:calico-system,Attempt:0,} returns sandbox id \"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\"" Dec 13 13:28:38.718612 containerd[1449]: time="2024-12-13T13:28:38.718581534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xb862,Uid:60239819-d7d7-469a-a3c7-c0c6a4778ad1,Namespace:kube-system,Attempt:0,} returns sandbox id \"996938b1540a9e47bc80853e32d3d3e6de852a631045fe3fad43e27e1e4766fc\"" Dec 13 13:28:38.721489 containerd[1449]: time="2024-12-13T13:28:38.721457238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:28:39.200453 kubelet[1847]: E1213 13:28:39.200398 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:39.362447 kubelet[1847]: E1213 13:28:39.361595 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:39.392276 systemd[1]: run-containerd-runc-k8s.io-0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df-runc.Z3LzZI.mount: Deactivated successfully. Dec 13 13:28:40.203002 kubelet[1847]: E1213 13:28:40.202185 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:40.371471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251726882.mount: Deactivated successfully. 
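The recurring file_linux.go "Unable to read config path ... /etc/kubernetes/manifests" entries are kubelet's static-pod file source re-checking its manifest directory; the path does not exist on this node, so each check is logged and ignored, which is harmless on a worker that runs no static pods. A rough sketch of that check (illustrative only; the polling interval below is a placeholder, not kubelet's configured value):

```go
// Illustrative sketch (not kubelet's file source): re-check the static-pod
// manifest directory and log-and-skip when it does not exist, as in the
// repeated file_linux.go entries above.
package main

import (
	"fmt"
	"os"
	"time"
)

func watchStaticPodDir(path string, interval time.Duration) {
	for range time.Tick(interval) {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", path)
			continue
		}
		// A real implementation would parse any pod manifests found here.
	}
}

func main() {
	watchStaticPodDir("/etc/kubernetes/manifests", 20*time.Second) // interval is assumed
}
```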
Dec 13 13:28:40.639730 containerd[1449]: time="2024-12-13T13:28:40.639633236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:40.640833 containerd[1449]: time="2024-12-13T13:28:40.640671694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 13:28:40.642030 containerd[1449]: time="2024-12-13T13:28:40.641946415Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:40.645870 containerd[1449]: time="2024-12-13T13:28:40.645799742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:40.646641 containerd[1449]: time="2024-12-13T13:28:40.646493843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.92474575s" Dec 13 13:28:40.646641 containerd[1449]: time="2024-12-13T13:28:40.646522688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:28:40.649377 containerd[1449]: time="2024-12-13T13:28:40.649127904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 13:28:40.649856 containerd[1449]: time="2024-12-13T13:28:40.649815443Z" level=info msg="CreateContainer within sandbox \"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:28:40.670010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444135500.mount: Deactivated successfully. Dec 13 13:28:40.674827 containerd[1449]: time="2024-12-13T13:28:40.674759125Z" level=info msg="CreateContainer within sandbox \"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6\"" Dec 13 13:28:40.675995 containerd[1449]: time="2024-12-13T13:28:40.675958575Z" level=info msg="StartContainer for \"f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6\"" Dec 13 13:28:40.716937 systemd[1]: run-containerd-runc-k8s.io-f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6-runc.LNPXKo.mount: Deactivated successfully. Dec 13 13:28:40.723132 systemd[1]: Started cri-containerd-f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6.scope - libcontainer container f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6. Dec 13 13:28:40.759451 containerd[1449]: time="2024-12-13T13:28:40.759350095Z" level=info msg="StartContainer for \"f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6\" returns successfully" Dec 13 13:28:40.772996 systemd[1]: cri-containerd-f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6.scope: Deactivated successfully. 
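The "in 1.92474575s" figure above is consistent with the surrounding timestamps: containerd logged the PullImage request for pod2daemon-flexvol:v3.29.1 at 13:28:38.721457238Z and the Pulled event at 13:28:40.646493843Z, about 1.925 s apart (containerd times the pull internally, hence the small difference). A quick cross-check using those two timestamps from the log:

```go
// Cross-check of the "Pulled image ... in 1.92474575s" figure: the reported
// duration is roughly the gap between the PullImage request and the Pulled
// event logged by containerd above.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:28:38.721457238Z")
	finished, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:28:40.646493843Z")
	fmt.Println(finished.Sub(started)) // ~1.925s, close to the logged 1.92474575s
}
```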
Dec 13 13:28:40.998324 containerd[1449]: time="2024-12-13T13:28:40.996283574Z" level=info msg="shim disconnected" id=f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6 namespace=k8s.io Dec 13 13:28:40.998324 containerd[1449]: time="2024-12-13T13:28:40.996476526Z" level=warning msg="cleaning up after shim disconnected" id=f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6 namespace=k8s.io Dec 13 13:28:40.998324 containerd[1449]: time="2024-12-13T13:28:40.996550505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:41.202464 kubelet[1847]: E1213 13:28:41.202351 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:41.361085 kubelet[1847]: E1213 13:28:41.360972 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:41.665542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8acf8698734e5702ef2be293093fba2ea27dfb829849e64661aefb111689dd6-rootfs.mount: Deactivated successfully. Dec 13 13:28:42.203177 kubelet[1847]: E1213 13:28:42.203010 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:42.281093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839333591.mount: Deactivated successfully. Dec 13 13:28:42.942598 containerd[1449]: time="2024-12-13T13:28:42.942486036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:42.944123 containerd[1449]: time="2024-12-13T13:28:42.944060619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Dec 13 13:28:42.945850 containerd[1449]: time="2024-12-13T13:28:42.945767851Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:42.948675 containerd[1449]: time="2024-12-13T13:28:42.948590946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:42.950021 containerd[1449]: time="2024-12-13T13:28:42.949328519Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.300148938s" Dec 13 13:28:42.950021 containerd[1449]: time="2024-12-13T13:28:42.949381088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 13:28:42.951122 containerd[1449]: time="2024-12-13T13:28:42.951042734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:28:42.952140 containerd[1449]: time="2024-12-13T13:28:42.952081312Z" level=info msg="CreateContainer within sandbox 
\"996938b1540a9e47bc80853e32d3d3e6de852a631045fe3fad43e27e1e4766fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:28:42.989249 containerd[1449]: time="2024-12-13T13:28:42.989154032Z" level=info msg="CreateContainer within sandbox \"996938b1540a9e47bc80853e32d3d3e6de852a631045fe3fad43e27e1e4766fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"27772061f7a6f21489936189dcb619277bfa94c42e0fea97a4d6b6add0719909\"" Dec 13 13:28:42.990183 containerd[1449]: time="2024-12-13T13:28:42.990025146Z" level=info msg="StartContainer for \"27772061f7a6f21489936189dcb619277bfa94c42e0fea97a4d6b6add0719909\"" Dec 13 13:28:43.031121 systemd[1]: Started cri-containerd-27772061f7a6f21489936189dcb619277bfa94c42e0fea97a4d6b6add0719909.scope - libcontainer container 27772061f7a6f21489936189dcb619277bfa94c42e0fea97a4d6b6add0719909. Dec 13 13:28:43.074777 containerd[1449]: time="2024-12-13T13:28:43.074686258Z" level=info msg="StartContainer for \"27772061f7a6f21489936189dcb619277bfa94c42e0fea97a4d6b6add0719909\" returns successfully" Dec 13 13:28:43.204557 kubelet[1847]: E1213 13:28:43.204270 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:43.362786 kubelet[1847]: E1213 13:28:43.361658 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:43.582421 kubelet[1847]: I1213 13:28:43.582241 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xb862" podStartSLOduration=4.3541422579999995 podStartE2EDuration="8.582157139s" podCreationTimestamp="2024-12-13 13:28:35 +0000 UTC" firstStartedPulling="2024-12-13 13:28:38.72229555 +0000 UTC m=+4.174395869" lastFinishedPulling="2024-12-13 13:28:42.950310431 +0000 UTC m=+8.402410750" observedRunningTime="2024-12-13 13:28:43.580022065 +0000 UTC m=+9.032122444" watchObservedRunningTime="2024-12-13 13:28:43.582157139 +0000 UTC m=+9.034257538" Dec 13 13:28:44.205113 kubelet[1847]: E1213 13:28:44.205036 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:45.206120 kubelet[1847]: E1213 13:28:45.206031 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:45.364460 kubelet[1847]: E1213 13:28:45.364394 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:46.206832 kubelet[1847]: E1213 13:28:46.206757 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:47.207967 kubelet[1847]: E1213 13:28:47.207855 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:47.362955 kubelet[1847]: E1213 13:28:47.361432 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:48.208688 kubelet[1847]: E1213 13:28:48.208629 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:48.646084 update_engine[1438]: I20241213 13:28:48.645941 1438 update_attempter.cc:509] Updating boot flags... Dec 13 13:28:48.853469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2271) Dec 13 13:28:48.931990 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2275) Dec 13 13:28:49.209309 kubelet[1847]: E1213 13:28:49.209258 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:49.361864 kubelet[1847]: E1213 13:28:49.361401 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:50.117259 containerd[1449]: time="2024-12-13T13:28:50.116927539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:50.120798 containerd[1449]: time="2024-12-13T13:28:50.120665009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:28:50.132287 containerd[1449]: time="2024-12-13T13:28:50.132169516Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:50.142771 containerd[1449]: time="2024-12-13T13:28:50.142662014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:50.144580 containerd[1449]: time="2024-12-13T13:28:50.144494852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.193189515s" Dec 13 13:28:50.144580 containerd[1449]: time="2024-12-13T13:28:50.144575173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:28:50.153077 containerd[1449]: time="2024-12-13T13:28:50.152645398Z" level=info msg="CreateContainer within sandbox \"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:28:50.209673 kubelet[1847]: E1213 13:28:50.209592 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:50.351638 containerd[1449]: time="2024-12-13T13:28:50.351555514Z" level=info msg="CreateContainer within sandbox 
\"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d\"" Dec 13 13:28:50.352671 containerd[1449]: time="2024-12-13T13:28:50.352461183Z" level=info msg="StartContainer for \"d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d\"" Dec 13 13:28:50.422193 systemd[1]: Started cri-containerd-d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d.scope - libcontainer container d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d. Dec 13 13:28:50.653704 containerd[1449]: time="2024-12-13T13:28:50.653545106Z" level=info msg="StartContainer for \"d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d\" returns successfully" Dec 13 13:28:51.210339 kubelet[1847]: E1213 13:28:51.210244 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:51.362920 kubelet[1847]: E1213 13:28:51.361616 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:52.211568 kubelet[1847]: E1213 13:28:52.211471 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:52.357791 systemd[1]: cri-containerd-d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d.scope: Deactivated successfully. Dec 13 13:28:52.396382 kubelet[1847]: I1213 13:28:52.395434 1847 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 13:28:52.409408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d-rootfs.mount: Deactivated successfully. Dec 13 13:28:53.151076 containerd[1449]: time="2024-12-13T13:28:53.150763087Z" level=info msg="shim disconnected" id=d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d namespace=k8s.io Dec 13 13:28:53.151076 containerd[1449]: time="2024-12-13T13:28:53.150865770Z" level=warning msg="cleaning up after shim disconnected" id=d60b521051fbffd9398a5d5f0b3c6d45391d970386c9b5823a949e8b7e5a493d namespace=k8s.io Dec 13 13:28:53.151076 containerd[1449]: time="2024-12-13T13:28:53.150953665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:53.212104 kubelet[1847]: E1213 13:28:53.211979 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:53.376077 systemd[1]: Created slice kubepods-besteffort-pod6b554734_fc9d_4b74_9b2d_0c0e98baacdf.slice - libcontainer container kubepods-besteffort-pod6b554734_fc9d_4b74_9b2d_0c0e98baacdf.slice. 
Dec 13 13:28:53.381733 containerd[1449]: time="2024-12-13T13:28:53.381520593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:0,}" Dec 13 13:28:53.523897 containerd[1449]: time="2024-12-13T13:28:53.523689561Z" level=error msg="Failed to destroy network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.525961 containerd[1449]: time="2024-12-13T13:28:53.524278996Z" level=error msg="encountered an error cleaning up failed sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.526198 containerd[1449]: time="2024-12-13T13:28:53.524465175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.526408 kubelet[1847]: E1213 13:28:53.526359 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.526487 kubelet[1847]: E1213 13:28:53.526447 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:53.526487 kubelet[1847]: E1213 13:28:53.526474 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:53.526620 kubelet[1847]: E1213 13:28:53.526530 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:53.526760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb-shm.mount: Deactivated successfully. Dec 13 13:28:53.680112 kubelet[1847]: I1213 13:28:53.679970 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb" Dec 13 13:28:53.681300 containerd[1449]: time="2024-12-13T13:28:53.681209700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:28:53.682284 containerd[1449]: time="2024-12-13T13:28:53.681604400Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:53.682284 containerd[1449]: time="2024-12-13T13:28:53.682097355Z" level=info msg="Ensure that sandbox 44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb in task-service has been cleanup successfully" Dec 13 13:28:53.686276 containerd[1449]: time="2024-12-13T13:28:53.685995706Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:53.686276 containerd[1449]: time="2024-12-13T13:28:53.686049938Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:53.687760 systemd[1]: run-netns-cni\x2db35e0ef7\x2ddd31\x2de2aa\x2dfd06\x2d4c77548e7ec3.mount: Deactivated successfully. Dec 13 13:28:53.692816 containerd[1449]: time="2024-12-13T13:28:53.692740376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:1,}" Dec 13 13:28:53.814551 containerd[1449]: time="2024-12-13T13:28:53.814390380Z" level=error msg="Failed to destroy network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.816196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a-shm.mount: Deactivated successfully. 
Dec 13 13:28:53.817017 containerd[1449]: time="2024-12-13T13:28:53.816771656Z" level=error msg="encountered an error cleaning up failed sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.817017 containerd[1449]: time="2024-12-13T13:28:53.816923311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.817649 kubelet[1847]: E1213 13:28:53.817206 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:53.817649 kubelet[1847]: E1213 13:28:53.817256 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:53.817649 kubelet[1847]: E1213 13:28:53.817280 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:53.818075 kubelet[1847]: E1213 13:28:53.817355 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:54.213368 kubelet[1847]: E1213 13:28:54.213203 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:54.685164 kubelet[1847]: I1213 13:28:54.685123 1847 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a" Dec 13 13:28:54.686819 containerd[1449]: time="2024-12-13T13:28:54.686131070Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:28:54.686819 containerd[1449]: time="2024-12-13T13:28:54.686547801Z" level=info msg="Ensure that sandbox b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a in task-service has been cleanup successfully" Dec 13 13:28:54.689221 containerd[1449]: time="2024-12-13T13:28:54.689145423Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:28:54.689221 containerd[1449]: time="2024-12-13T13:28:54.689193193Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:28:54.691267 containerd[1449]: time="2024-12-13T13:28:54.689958488Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:54.691267 containerd[1449]: time="2024-12-13T13:28:54.690140760Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:54.691267 containerd[1449]: time="2024-12-13T13:28:54.690168552Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:54.692691 containerd[1449]: time="2024-12-13T13:28:54.692042697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:2,}" Dec 13 13:28:54.693643 systemd[1]: run-netns-cni\x2d9c47822e\x2d4cc8\x2da460\x2d340a\x2d27c573cb27bc.mount: Deactivated successfully. Dec 13 13:28:54.808240 containerd[1449]: time="2024-12-13T13:28:54.808155867Z" level=error msg="Failed to destroy network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:54.808240 containerd[1449]: time="2024-12-13T13:28:54.810343149Z" level=error msg="encountered an error cleaning up failed sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:54.808240 containerd[1449]: time="2024-12-13T13:28:54.810423680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:54.809831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652-shm.mount: Deactivated successfully. 
Dec 13 13:28:54.812567 kubelet[1847]: E1213 13:28:54.810660 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:54.812567 kubelet[1847]: E1213 13:28:54.810718 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:54.812567 kubelet[1847]: E1213 13:28:54.810743 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:54.812785 kubelet[1847]: E1213 13:28:54.810787 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:55.197030 kubelet[1847]: E1213 13:28:55.196958 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:55.214247 kubelet[1847]: E1213 13:28:55.214094 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:55.504548 systemd[1]: Created slice kubepods-besteffort-pod88a7e54e_548f_40c3_aece_f577d2fd980d.slice - libcontainer container kubepods-besteffort-pod88a7e54e_548f_40c3_aece_f577d2fd980d.slice. 
Dec 13 13:28:55.620767 kubelet[1847]: I1213 13:28:55.620717 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzwp\" (UniqueName: \"kubernetes.io/projected/88a7e54e-548f-40c3-aece-f577d2fd980d-kube-api-access-vvzwp\") pod \"nginx-deployment-8587fbcb89-pf4jn\" (UID: \"88a7e54e-548f-40c3-aece-f577d2fd980d\") " pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:55.688474 kubelet[1847]: I1213 13:28:55.688429 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652" Dec 13 13:28:55.689010 containerd[1449]: time="2024-12-13T13:28:55.688980563Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:28:55.689810 containerd[1449]: time="2024-12-13T13:28:55.689661320Z" level=info msg="Ensure that sandbox 80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652 in task-service has been cleanup successfully" Dec 13 13:28:55.691973 containerd[1449]: time="2024-12-13T13:28:55.691684445Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:28:55.691973 containerd[1449]: time="2024-12-13T13:28:55.691720302Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:28:55.693321 containerd[1449]: time="2024-12-13T13:28:55.693002527Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:28:55.693321 containerd[1449]: time="2024-12-13T13:28:55.693099929Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:28:55.693321 containerd[1449]: time="2024-12-13T13:28:55.693119025Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:28:55.693662 containerd[1449]: time="2024-12-13T13:28:55.693633660Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:55.693853 containerd[1449]: time="2024-12-13T13:28:55.693828406Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:55.694005 containerd[1449]: time="2024-12-13T13:28:55.693979359Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:55.694580 containerd[1449]: time="2024-12-13T13:28:55.694549869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:3,}" Dec 13 13:28:55.695930 systemd[1]: run-netns-cni\x2dd25c8f1b\x2dfd08\x2dbeab\x2dad63\x2d2a8daf03af9b.mount: Deactivated successfully. 
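Every one of the CNI failures above reduces to a single missing file: /var/lib/calico/nodename, which the calico/node agent writes once it is running with /var/lib/calico/ mounted; until then the plugin's ADD and DELETE both fail with the "no such file or directory" error quoted in the log. As a rough illustration only (not the plugin's actual source; the nodenamePath constant and main function are made up for this sketch), an equivalent Go check looks like this:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "strings"
    )

    // nodenamePath is the file referenced by the error messages in the log;
    // calico/node creates it after it starts and mounts /var/lib/calico/.
    const nodenamePath = "/var/lib/calico/nodename"

    func main() {
        if _, err := os.Stat(nodenamePath); errors.Is(err, os.ErrNotExist) {
            // This is the condition behind every "failed (add)" line above.
            fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenamePath)
            os.Exit(1)
        }
        data, err := os.ReadFile(nodenamePath)
        if err != nil {
            fmt.Println("unexpected error:", err)
            os.Exit(1)
        }
        fmt.Println("node name:", strings.TrimSpace(string(data)))
    }

Once calico/node is healthy and the file exists, the same stat succeeds and sandbox creation proceeds.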
Dec 13 13:28:55.816475 containerd[1449]: time="2024-12-13T13:28:55.816091410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:0,}" Dec 13 13:28:55.861905 containerd[1449]: time="2024-12-13T13:28:55.861823350Z" level=error msg="Failed to destroy network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.862341 containerd[1449]: time="2024-12-13T13:28:55.862305244Z" level=error msg="encountered an error cleaning up failed sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.862409 containerd[1449]: time="2024-12-13T13:28:55.862390484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.862726 kubelet[1847]: E1213 13:28:55.862660 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.862785 kubelet[1847]: E1213 13:28:55.862750 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:55.862837 kubelet[1847]: E1213 13:28:55.862779 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:55.862875 kubelet[1847]: E1213 13:28:55.862838 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:55.937957 containerd[1449]: time="2024-12-13T13:28:55.937781079Z" level=error msg="Failed to destroy network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.938184 containerd[1449]: time="2024-12-13T13:28:55.938142226Z" level=error msg="encountered an error cleaning up failed sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.938243 containerd[1449]: time="2024-12-13T13:28:55.938206296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.938438 kubelet[1847]: E1213 13:28:55.938398 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:55.938516 kubelet[1847]: E1213 13:28:55.938467 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:55.938516 kubelet[1847]: E1213 13:28:55.938493 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:55.938587 kubelet[1847]: E1213 13:28:55.938542 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:28:56.216481 kubelet[1847]: E1213 13:28:56.214738 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:56.692872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6-shm.mount: Deactivated successfully. Dec 13 13:28:56.694046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f-shm.mount: Deactivated successfully. Dec 13 13:28:56.702956 kubelet[1847]: I1213 13:28:56.701613 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f" Dec 13 13:28:56.704158 containerd[1449]: time="2024-12-13T13:28:56.703451664Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:28:56.704158 containerd[1449]: time="2024-12-13T13:28:56.703861322Z" level=info msg="Ensure that sandbox 600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f in task-service has been cleanup successfully" Dec 13 13:28:56.706394 systemd[1]: run-netns-cni\x2d1bfb2e3d\x2d0dd4\x2d6a81\x2d4186\x2d3b961bbb3de3.mount: Deactivated successfully. Dec 13 13:28:56.708988 containerd[1449]: time="2024-12-13T13:28:56.708933345Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:28:56.711744 containerd[1449]: time="2024-12-13T13:28:56.709155892Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:28:56.712580 containerd[1449]: time="2024-12-13T13:28:56.712526714Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:28:56.713409 containerd[1449]: time="2024-12-13T13:28:56.712848027Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:28:56.714361 containerd[1449]: time="2024-12-13T13:28:56.713541487Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:28:56.715519 containerd[1449]: time="2024-12-13T13:28:56.715001245Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:28:56.715519 containerd[1449]: time="2024-12-13T13:28:56.715164311Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:28:56.715519 containerd[1449]: time="2024-12-13T13:28:56.715194748Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:28:56.717032 kubelet[1847]: I1213 13:28:56.716106 1847 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6" Dec 13 13:28:56.717406 containerd[1449]: time="2024-12-13T13:28:56.717357324Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:56.718194 containerd[1449]: time="2024-12-13T13:28:56.718153247Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:56.718543 containerd[1449]: time="2024-12-13T13:28:56.718481152Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:56.720615 containerd[1449]: time="2024-12-13T13:28:56.719813821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:4,}" Dec 13 13:28:56.740818 containerd[1449]: time="2024-12-13T13:28:56.739801608Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:28:56.740818 containerd[1449]: time="2024-12-13T13:28:56.740479009Z" level=info msg="Ensure that sandbox d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6 in task-service has been cleanup successfully" Dec 13 13:28:56.742332 systemd[1]: run-netns-cni\x2dfc76c5a3\x2d534a\x2d9f7a\x2d0f76\x2d0ace9ae0ff7e.mount: Deactivated successfully. Dec 13 13:28:56.742730 containerd[1449]: time="2024-12-13T13:28:56.742684365Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:28:56.745783 containerd[1449]: time="2024-12-13T13:28:56.743047566Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:28:56.778690 containerd[1449]: time="2024-12-13T13:28:56.778630031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:1,}" Dec 13 13:28:56.865998 containerd[1449]: time="2024-12-13T13:28:56.865873816Z" level=error msg="Failed to destroy network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.866603 containerd[1449]: time="2024-12-13T13:28:56.866561917Z" level=error msg="encountered an error cleaning up failed sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.866692 containerd[1449]: time="2024-12-13T13:28:56.866658057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.867073 kubelet[1847]: E1213 13:28:56.867029 
1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.867262 kubelet[1847]: E1213 13:28:56.867220 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:56.867369 kubelet[1847]: E1213 13:28:56.867350 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:56.867537 kubelet[1847]: E1213 13:28:56.867487 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:56.906102 containerd[1449]: time="2024-12-13T13:28:56.906034117Z" level=error msg="Failed to destroy network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.906553 containerd[1449]: time="2024-12-13T13:28:56.906510040Z" level=error msg="encountered an error cleaning up failed sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.906638 containerd[1449]: time="2024-12-13T13:28:56.906597073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
13:28:56.907344 kubelet[1847]: E1213 13:28:56.906846 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:56.907344 kubelet[1847]: E1213 13:28:56.906955 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:56.907344 kubelet[1847]: E1213 13:28:56.906986 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:56.907472 kubelet[1847]: E1213 13:28:56.907046 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:28:57.215544 kubelet[1847]: E1213 13:28:57.215475 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:57.698486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320-shm.mount: Deactivated successfully. Dec 13 13:28:57.720943 kubelet[1847]: I1213 13:28:57.720846 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a" Dec 13 13:28:57.730819 containerd[1449]: time="2024-12-13T13:28:57.723355203Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:28:57.730819 containerd[1449]: time="2024-12-13T13:28:57.723729935Z" level=info msg="Ensure that sandbox e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a in task-service has been cleanup successfully" Dec 13 13:28:57.729437 systemd[1]: run-netns-cni\x2d8507e8f2\x2d9fb1\x2d2abd\x2d30dc\x2db0fd650a4584.mount: Deactivated successfully. 
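Each failed attempt in this stretch of the log follows the same cycle: kubelet asks the runtime to RunPodSandbox, the Calico ADD fails, containerd marks the sandbox SANDBOX_UNKNOWN and tears it down (the run-netns-cni-*.mount and *-shm.mount units deactivating), and roughly a second later the pod worker retries with the Attempt counter incremented. A minimal, purely illustrative sketch of that retry shape follows; the runPodSandbox function and the fixed six-attempt loop are assumptions for the example, not kubelet's implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errNodenameMissing stands in for the CNI error repeated in the log.
    var errNodenameMissing = errors.New("stat /var/lib/calico/nodename: no such file or directory")

    // runPodSandbox is a hypothetical stand-in for the CRI RunPodSandbox call;
    // it keeps failing while calico/node has not written its nodename file.
    func runPodSandbox(attempt int) error {
        fmt.Printf("RunPodSandbox attempt=%d\n", attempt)
        return errNodenameMissing
    }

    func main() {
        for attempt := 1; attempt <= 6; attempt++ {
            if err := runPodSandbox(attempt); err != nil {
                fmt.Printf("attempt %d failed: %v; tearing down sandbox and retrying\n", attempt, err)
                time.Sleep(time.Second) // the log shows roughly one retry per second
                continue
            }
            fmt.Println("sandbox created")
            return
        }
        fmt.Println("giving up for this sync; the pod worker will try again later")
    }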
Dec 13 13:28:57.735825 containerd[1449]: time="2024-12-13T13:28:57.732245807Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:28:57.735825 containerd[1449]: time="2024-12-13T13:28:57.732297694Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:28:57.735825 containerd[1449]: time="2024-12-13T13:28:57.735291770Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:28:57.735825 containerd[1449]: time="2024-12-13T13:28:57.735477468Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:28:57.735825 containerd[1449]: time="2024-12-13T13:28:57.735505571Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:28:57.738572 containerd[1449]: time="2024-12-13T13:28:57.737988257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:2,}" Dec 13 13:28:57.777095 kubelet[1847]: I1213 13:28:57.776978 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320" Dec 13 13:28:57.782051 containerd[1449]: time="2024-12-13T13:28:57.777574862Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:28:57.782051 containerd[1449]: time="2024-12-13T13:28:57.777790867Z" level=info msg="Ensure that sandbox 7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320 in task-service has been cleanup successfully" Dec 13 13:28:57.782051 containerd[1449]: time="2024-12-13T13:28:57.778033892Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:28:57.782051 containerd[1449]: time="2024-12-13T13:28:57.778049612Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.786815723Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.786972256Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.786986603Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787293809Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787393176Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787405188Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:28:57.790235 containerd[1449]: 
time="2024-12-13T13:28:57.787608119Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787674353Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787685704Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787867095Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787953156Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.787965629Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:57.790235 containerd[1449]: time="2024-12-13T13:28:57.788327819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:5,}" Dec 13 13:28:57.789437 systemd[1]: run-netns-cni\x2da80ced4b\x2dfeea\x2d5458\x2d5c96\x2d261afc5aeab0.mount: Deactivated successfully. Dec 13 13:28:57.883561 containerd[1449]: time="2024-12-13T13:28:57.882528001Z" level=error msg="Failed to destroy network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.883561 containerd[1449]: time="2024-12-13T13:28:57.882902023Z" level=error msg="encountered an error cleaning up failed sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.883561 containerd[1449]: time="2024-12-13T13:28:57.882958278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.883777 kubelet[1847]: E1213 13:28:57.883175 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.883777 kubelet[1847]: E1213 13:28:57.883231 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:57.883777 kubelet[1847]: E1213 13:28:57.883253 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:57.883894 kubelet[1847]: E1213 13:28:57.883295 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:28:57.924860 containerd[1449]: time="2024-12-13T13:28:57.924797528Z" level=error msg="Failed to destroy network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.925928 containerd[1449]: time="2024-12-13T13:28:57.925188822Z" level=error msg="encountered an error cleaning up failed sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.925928 containerd[1449]: time="2024-12-13T13:28:57.925264654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.926102 kubelet[1847]: E1213 13:28:57.925524 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:57.926102 kubelet[1847]: E1213 13:28:57.925592 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:57.926102 kubelet[1847]: E1213 13:28:57.925623 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:57.926222 kubelet[1847]: E1213 13:28:57.925673 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:58.216533 kubelet[1847]: E1213 13:28:58.216417 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:58.693801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5-shm.mount: Deactivated successfully. Dec 13 13:28:58.694103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6-shm.mount: Deactivated successfully. 
Dec 13 13:28:58.785713 kubelet[1847]: I1213 13:28:58.785634 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5" Dec 13 13:28:58.790942 containerd[1449]: time="2024-12-13T13:28:58.787863458Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:28:58.790942 containerd[1449]: time="2024-12-13T13:28:58.788337497Z" level=info msg="Ensure that sandbox 6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5 in task-service has been cleanup successfully" Dec 13 13:28:58.792432 containerd[1449]: time="2024-12-13T13:28:58.791820549Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:28:58.792432 containerd[1449]: time="2024-12-13T13:28:58.791928762Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:28:58.794303 containerd[1449]: time="2024-12-13T13:28:58.794156801Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:28:58.794303 containerd[1449]: time="2024-12-13T13:28:58.794352287Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:28:58.794303 containerd[1449]: time="2024-12-13T13:28:58.794382835Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:28:58.797427 containerd[1449]: time="2024-12-13T13:28:58.796232794Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:28:58.797427 containerd[1449]: time="2024-12-13T13:28:58.796420947Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:28:58.797427 containerd[1449]: time="2024-12-13T13:28:58.796449240Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:28:58.797427 containerd[1449]: time="2024-12-13T13:28:58.796472434Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:28:58.795797 systemd[1]: run-netns-cni\x2d2ec80da8\x2d5271\x2d13ed\x2d7f48\x2d8d94c3ab2ffd.mount: Deactivated successfully. 
Dec 13 13:28:58.798783 kubelet[1847]: I1213 13:28:58.794566 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.799676262Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.799841813Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.799868824Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.800451496Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.800751499Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.800783519Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.801104140Z" level=info msg="Ensure that sandbox 9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6 in task-service has been cleanup successfully" Dec 13 13:28:58.802603 containerd[1449]: time="2024-12-13T13:28:58.801953814Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:58.803312 containerd[1449]: time="2024-12-13T13:28:58.802782358Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:58.803312 containerd[1449]: time="2024-12-13T13:28:58.802816773Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:58.805927 containerd[1449]: time="2024-12-13T13:28:58.804291959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:6,}" Dec 13 13:28:58.806518 containerd[1449]: time="2024-12-13T13:28:58.805982199Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:28:58.806518 containerd[1449]: time="2024-12-13T13:28:58.806441731Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:28:58.808662 systemd[1]: run-netns-cni\x2d7dbb327c\x2daae7\x2deb72\x2d26cf\x2d2888436d8a77.mount: Deactivated successfully. 
Dec 13 13:28:58.809571 containerd[1449]: time="2024-12-13T13:28:58.808740593Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:28:58.809571 containerd[1449]: time="2024-12-13T13:28:58.808958311Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:28:58.809571 containerd[1449]: time="2024-12-13T13:28:58.808987165Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:28:58.809934 containerd[1449]: time="2024-12-13T13:28:58.809562805Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:28:58.809934 containerd[1449]: time="2024-12-13T13:28:58.809694301Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:28:58.809934 containerd[1449]: time="2024-12-13T13:28:58.809717545Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:28:58.811577 containerd[1449]: time="2024-12-13T13:28:58.810714415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:3,}" Dec 13 13:28:59.216932 kubelet[1847]: E1213 13:28:59.216613 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:59.657096 containerd[1449]: time="2024-12-13T13:28:59.657043765Z" level=error msg="Failed to destroy network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.659422 containerd[1449]: time="2024-12-13T13:28:59.659165925Z" level=error msg="encountered an error cleaning up failed sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.659422 containerd[1449]: time="2024-12-13T13:28:59.659254642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.660122 kubelet[1847]: E1213 13:28:59.659524 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.660122 kubelet[1847]: E1213 13:28:59.659598 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:59.660122 kubelet[1847]: E1213 13:28:59.659637 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:59.660297 kubelet[1847]: E1213 13:28:59.659697 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:59.667472 containerd[1449]: time="2024-12-13T13:28:59.667318255Z" level=error msg="Failed to destroy network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.667991 containerd[1449]: time="2024-12-13T13:28:59.667770784Z" level=error msg="encountered an error cleaning up failed sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.667991 containerd[1449]: time="2024-12-13T13:28:59.667835685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.668127 kubelet[1847]: E1213 13:28:59.668058 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.668170 kubelet[1847]: E1213 13:28:59.668123 1847 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:59.668170 kubelet[1847]: E1213 13:28:59.668148 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:59.668225 kubelet[1847]: E1213 13:28:59.668192 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:28:59.695830 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6-shm.mount: Deactivated successfully. 
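The kubelet lines throughout this log use the standard klog header, e.g. "E1213 13:28:59.668192 1847 pod_workers.go:1301] ...": a severity letter, month and day, wall-clock time, PID, source file and line, then the message. A small sketch for pulling those fields apart when sifting through such logs; the regular expression is an assumption about this layout for illustration, not an official parser:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches lines like:
    //   E1213 13:28:59.668192 1847 pod_workers.go:1301] "Error syncing pod, skipping" ...
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

    func main() {
        line := `E1213 13:28:59.668192 1847 pod_workers.go:1301] "Error syncing pod, skipping"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Println("severity:", m[1]) // I, W, E or F
        fmt.Println("date:", m[2]+"/"+m[3], "time:", m[4])
        fmt.Println("pid:", m[5], "source:", m[6]+":"+m[7])
        fmt.Println("message:", m[8])
    }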
Dec 13 13:28:59.802344 kubelet[1847]: I1213 13:28:59.802284 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6" Dec 13 13:28:59.803160 containerd[1449]: time="2024-12-13T13:28:59.803121955Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:28:59.806655 containerd[1449]: time="2024-12-13T13:28:59.803902839Z" level=info msg="Ensure that sandbox 3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6 in task-service has been cleanup successfully" Dec 13 13:28:59.806655 containerd[1449]: time="2024-12-13T13:28:59.806027594Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:28:59.806655 containerd[1449]: time="2024-12-13T13:28:59.806049195Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:28:59.807088 containerd[1449]: time="2024-12-13T13:28:59.807048409Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:28:59.807188 containerd[1449]: time="2024-12-13T13:28:59.807161020Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:28:59.807188 containerd[1449]: time="2024-12-13T13:28:59.807181989Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:28:59.807747 systemd[1]: run-netns-cni\x2dfa401d45\x2d0c08\x2d9efd\x2de5ce\x2d7ec44795b096.mount: Deactivated successfully. Dec 13 13:28:59.809354 containerd[1449]: time="2024-12-13T13:28:59.809331380Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:28:59.810200 containerd[1449]: time="2024-12-13T13:28:59.810180864Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:28:59.810282 containerd[1449]: time="2024-12-13T13:28:59.810266795Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:28:59.811475 containerd[1449]: time="2024-12-13T13:28:59.811399820Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:28:59.811559 containerd[1449]: time="2024-12-13T13:28:59.811532048Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:28:59.811559 containerd[1449]: time="2024-12-13T13:28:59.811552236Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:28:59.812827 containerd[1449]: time="2024-12-13T13:28:59.812805115Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:28:59.812968 kubelet[1847]: I1213 13:28:59.812934 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef" Dec 13 13:28:59.813074 containerd[1449]: time="2024-12-13T13:28:59.813056477Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" 
successfully" Dec 13 13:28:59.813139 containerd[1449]: time="2024-12-13T13:28:59.813124955Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:28:59.813786 containerd[1449]: time="2024-12-13T13:28:59.813753544Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:28:59.814020 containerd[1449]: time="2024-12-13T13:28:59.813990308Z" level=info msg="Ensure that sandbox bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef in task-service has been cleanup successfully" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814299889Z" level=info msg="TearDown network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814322652Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814631942Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814716190Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814729444Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814798694Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814861302Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:28:59.816488 containerd[1449]: time="2024-12-13T13:28:59.814872483Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:28:59.816922 containerd[1449]: time="2024-12-13T13:28:59.816860972Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:28:59.817013 containerd[1449]: time="2024-12-13T13:28:59.816987930Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:28:59.817013 containerd[1449]: time="2024-12-13T13:28:59.817008088Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:28:59.817100 containerd[1449]: time="2024-12-13T13:28:59.817074773Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:28:59.817171 containerd[1449]: time="2024-12-13T13:28:59.817147860Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:28:59.817171 containerd[1449]: time="2024-12-13T13:28:59.817166044Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:28:59.817258 systemd[1]: run-netns-cni\x2db6b31a30\x2d90dd\x2d70a9\x2d6bb6\x2d9ea6e8bcf630.mount: Deactivated 
successfully. Dec 13 13:28:59.819440 containerd[1449]: time="2024-12-13T13:28:59.819398221Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:28:59.819527 containerd[1449]: time="2024-12-13T13:28:59.819497898Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:28:59.819527 containerd[1449]: time="2024-12-13T13:28:59.819517164Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:28:59.819644 containerd[1449]: time="2024-12-13T13:28:59.819620287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:7,}" Dec 13 13:28:59.822139 containerd[1449]: time="2024-12-13T13:28:59.821557621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:4,}" Dec 13 13:28:59.955128 containerd[1449]: time="2024-12-13T13:28:59.954542544Z" level=error msg="Failed to destroy network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.956819 containerd[1449]: time="2024-12-13T13:28:59.956689931Z" level=error msg="encountered an error cleaning up failed sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.956819 containerd[1449]: time="2024-12-13T13:28:59.956807412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.957688 kubelet[1847]: E1213 13:28:59.957256 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.957688 kubelet[1847]: E1213 13:28:59.957341 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:59.957688 kubelet[1847]: E1213 13:28:59.957366 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:28:59.957836 kubelet[1847]: E1213 13:28:59.957414 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:28:59.959224 containerd[1449]: time="2024-12-13T13:28:59.959062762Z" level=error msg="Failed to destroy network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.959670 containerd[1449]: time="2024-12-13T13:28:59.959618113Z" level=error msg="encountered an error cleaning up failed sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.959838 containerd[1449]: time="2024-12-13T13:28:59.959797900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.960378 kubelet[1847]: E1213 13:28:59.960203 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:59.960378 kubelet[1847]: E1213 13:28:59.960258 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:59.960378 kubelet[1847]: E1213 13:28:59.960285 1847 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:28:59.960489 kubelet[1847]: E1213 13:28:59.960333 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:29:00.217737 kubelet[1847]: E1213 13:29:00.217495 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:00.693670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b-shm.mount: Deactivated successfully. Dec 13 13:29:00.817622 kubelet[1847]: I1213 13:29:00.817570 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b" Dec 13 13:29:00.820831 containerd[1449]: time="2024-12-13T13:29:00.818571480Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:00.820831 containerd[1449]: time="2024-12-13T13:29:00.818939931Z" level=info msg="Ensure that sandbox 6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b in task-service has been cleanup successfully" Dec 13 13:29:00.821590 containerd[1449]: time="2024-12-13T13:29:00.821432696Z" level=info msg="TearDown network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" successfully" Dec 13 13:29:00.821590 containerd[1449]: time="2024-12-13T13:29:00.821474655Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" returns successfully" Dec 13 13:29:00.822770 containerd[1449]: time="2024-12-13T13:29:00.822346981Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:00.822770 containerd[1449]: time="2024-12-13T13:29:00.822443582Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:29:00.822770 containerd[1449]: time="2024-12-13T13:29:00.822457508Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:29:00.822823 systemd[1]: run-netns-cni\x2d71756063\x2df937\x2dbfff\x2d05eb\x2d62023c998a85.mount: Deactivated successfully. 
Dec 13 13:29:00.824412 containerd[1449]: time="2024-12-13T13:29:00.823929018Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:00.824412 containerd[1449]: time="2024-12-13T13:29:00.824021792Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:29:00.824412 containerd[1449]: time="2024-12-13T13:29:00.824036399Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:29:00.826952 containerd[1449]: time="2024-12-13T13:29:00.826703472Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:00.826952 containerd[1449]: time="2024-12-13T13:29:00.826789393Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:29:00.826952 containerd[1449]: time="2024-12-13T13:29:00.826801946Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:29:00.827422 containerd[1449]: time="2024-12-13T13:29:00.827374911Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:00.827477 containerd[1449]: time="2024-12-13T13:29:00.827456965Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:29:00.827477 containerd[1449]: time="2024-12-13T13:29:00.827469849Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:29:00.827673 kubelet[1847]: I1213 13:29:00.827628 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0" Dec 13 13:29:00.828775 containerd[1449]: time="2024-12-13T13:29:00.828468953Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:00.828775 containerd[1449]: time="2024-12-13T13:29:00.828687472Z" level=info msg="Ensure that sandbox 9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0 in task-service has been cleanup successfully" Dec 13 13:29:00.830819 containerd[1449]: time="2024-12-13T13:29:00.830783093Z" level=info msg="TearDown network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" successfully" Dec 13 13:29:00.830819 containerd[1449]: time="2024-12-13T13:29:00.830812598Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" returns successfully" Dec 13 13:29:00.831046 containerd[1449]: time="2024-12-13T13:29:00.831017953Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:00.831929 containerd[1449]: time="2024-12-13T13:29:00.831098745Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:29:00.831929 containerd[1449]: time="2024-12-13T13:29:00.831117320Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:29:00.832320 systemd[1]: run-netns-cni\x2d64d262ff\x2dead4\x2dfb53\x2da0fd\x2dc361ef982759.mount: Deactivated 
successfully. Dec 13 13:29:00.833032 containerd[1449]: time="2024-12-13T13:29:00.832423069Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:00.833032 containerd[1449]: time="2024-12-13T13:29:00.832496106Z" level=info msg="TearDown network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:29:00.833032 containerd[1449]: time="2024-12-13T13:29:00.832507096Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:29:00.834064 containerd[1449]: time="2024-12-13T13:29:00.832558553Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:00.834064 containerd[1449]: time="2024-12-13T13:29:00.833616477Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:29:00.834064 containerd[1449]: time="2024-12-13T13:29:00.833642285Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:29:00.834761 containerd[1449]: time="2024-12-13T13:29:00.834708615Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:00.834847 containerd[1449]: time="2024-12-13T13:29:00.834822208Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:29:00.834847 containerd[1449]: time="2024-12-13T13:29:00.834842546Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:29:00.834962 containerd[1449]: time="2024-12-13T13:29:00.834947954Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:00.835049 containerd[1449]: time="2024-12-13T13:29:00.835022724Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:29:00.835049 containerd[1449]: time="2024-12-13T13:29:00.835044655Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:29:00.835747 containerd[1449]: time="2024-12-13T13:29:00.835605988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:8,}" Dec 13 13:29:00.837429 containerd[1449]: time="2024-12-13T13:29:00.837403489Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:00.837712 containerd[1449]: time="2024-12-13T13:29:00.837693934Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:29:00.837790 containerd[1449]: time="2024-12-13T13:29:00.837775457Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:29:00.838946 containerd[1449]: time="2024-12-13T13:29:00.838925304Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:00.839133 containerd[1449]: time="2024-12-13T13:29:00.839114238Z" level=info msg="TearDown network for sandbox 
\"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:29:00.839203 containerd[1449]: time="2024-12-13T13:29:00.839188397Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:29:00.839663 containerd[1449]: time="2024-12-13T13:29:00.839641357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:5,}" Dec 13 13:29:00.967332 containerd[1449]: time="2024-12-13T13:29:00.967212426Z" level=error msg="Failed to destroy network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.967794 containerd[1449]: time="2024-12-13T13:29:00.967765644Z" level=error msg="encountered an error cleaning up failed sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.968048 containerd[1449]: time="2024-12-13T13:29:00.967949639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.970124 kubelet[1847]: E1213 13:29:00.970076 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.970218 kubelet[1847]: E1213 13:29:00.970154 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:29:00.970218 kubelet[1847]: E1213 13:29:00.970179 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:29:00.970685 kubelet[1847]: E1213 13:29:00.970257 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:29:00.994672 containerd[1449]: time="2024-12-13T13:29:00.994625439Z" level=error msg="Failed to destroy network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.995215 containerd[1449]: time="2024-12-13T13:29:00.995151335Z" level=error msg="encountered an error cleaning up failed sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.995285 containerd[1449]: time="2024-12-13T13:29:00.995253006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.995838 kubelet[1847]: E1213 13:29:00.995466 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:00.995838 kubelet[1847]: E1213 13:29:00.995520 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:29:00.995838 kubelet[1847]: E1213 13:29:00.995549 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:29:00.996013 kubelet[1847]: E1213 13:29:00.995600 1847 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:29:01.219352 kubelet[1847]: E1213 13:29:01.218669 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:01.693187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b-shm.mount: Deactivated successfully. Dec 13 13:29:01.835735 kubelet[1847]: I1213 13:29:01.835555 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b" Dec 13 13:29:01.836730 containerd[1449]: time="2024-12-13T13:29:01.836695662Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" Dec 13 13:29:01.837011 containerd[1449]: time="2024-12-13T13:29:01.836923118Z" level=info msg="Ensure that sandbox 99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b in task-service has been cleanup successfully" Dec 13 13:29:01.839255 containerd[1449]: time="2024-12-13T13:29:01.839036662Z" level=info msg="TearDown network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" successfully" Dec 13 13:29:01.839255 containerd[1449]: time="2024-12-13T13:29:01.839061479Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" returns successfully" Dec 13 13:29:01.839973 systemd[1]: run-netns-cni\x2dff6a036f\x2d022c\x2db50b\x2d8490\x2dccb804be31e0.mount: Deactivated successfully. 
Dec 13 13:29:01.840198 containerd[1449]: time="2024-12-13T13:29:01.840044683Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:01.840198 containerd[1449]: time="2024-12-13T13:29:01.840116307Z" level=info msg="TearDown network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" successfully" Dec 13 13:29:01.840198 containerd[1449]: time="2024-12-13T13:29:01.840127969Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" returns successfully" Dec 13 13:29:01.840843 containerd[1449]: time="2024-12-13T13:29:01.840805309Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:01.840920 containerd[1449]: time="2024-12-13T13:29:01.840872966Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:29:01.840920 containerd[1449]: time="2024-12-13T13:29:01.840904505Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:29:01.841233 containerd[1449]: time="2024-12-13T13:29:01.841195251Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:01.841309 containerd[1449]: time="2024-12-13T13:29:01.841266665Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:29:01.841309 containerd[1449]: time="2024-12-13T13:29:01.841284618Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:29:01.843456 containerd[1449]: time="2024-12-13T13:29:01.843340364Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:01.843697 containerd[1449]: time="2024-12-13T13:29:01.843587567Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:29:01.843697 containerd[1449]: time="2024-12-13T13:29:01.843604479Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:29:01.845072 containerd[1449]: time="2024-12-13T13:29:01.845033790Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:01.845130 containerd[1449]: time="2024-12-13T13:29:01.845118919Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:29:01.845160 containerd[1449]: time="2024-12-13T13:29:01.845132214Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:29:01.845362 containerd[1449]: time="2024-12-13T13:29:01.845336938Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:01.845514 containerd[1449]: time="2024-12-13T13:29:01.845403894Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:29:01.845514 containerd[1449]: time="2024-12-13T13:29:01.845416237Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" 
returns successfully" Dec 13 13:29:01.845906 containerd[1449]: time="2024-12-13T13:29:01.845859849Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:01.845954 containerd[1449]: time="2024-12-13T13:29:01.845942163Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:29:01.845980 containerd[1449]: time="2024-12-13T13:29:01.845955779Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:29:01.846942 kubelet[1847]: I1213 13:29:01.846204 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e" Dec 13 13:29:01.847310 containerd[1449]: time="2024-12-13T13:29:01.847151582Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:01.847310 containerd[1449]: time="2024-12-13T13:29:01.847240979Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:29:01.847310 containerd[1449]: time="2024-12-13T13:29:01.847254475Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:29:01.847310 containerd[1449]: time="2024-12-13T13:29:01.847260987Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" Dec 13 13:29:01.847497 containerd[1449]: time="2024-12-13T13:29:01.847437147Z" level=info msg="Ensure that sandbox 9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e in task-service has been cleanup successfully" Dec 13 13:29:01.849230 containerd[1449]: time="2024-12-13T13:29:01.849192369Z" level=info msg="TearDown network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" successfully" Dec 13 13:29:01.849230 containerd[1449]: time="2024-12-13T13:29:01.849220282Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" returns successfully" Dec 13 13:29:01.849521 containerd[1449]: time="2024-12-13T13:29:01.849487262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:9,}" Dec 13 13:29:01.850672 systemd[1]: run-netns-cni\x2d130c8a85\x2d78bb\x2dc3cc\x2de9f0\x2d9cc218c5dbf9.mount: Deactivated successfully. 
Dec 13 13:29:01.852278 containerd[1449]: time="2024-12-13T13:29:01.852222152Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:01.852376 containerd[1449]: time="2024-12-13T13:29:01.852314966Z" level=info msg="TearDown network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" successfully" Dec 13 13:29:01.852376 containerd[1449]: time="2024-12-13T13:29:01.852327820Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" returns successfully" Dec 13 13:29:01.855055 containerd[1449]: time="2024-12-13T13:29:01.855022433Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:01.855131 containerd[1449]: time="2024-12-13T13:29:01.855104788Z" level=info msg="TearDown network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:29:01.855131 containerd[1449]: time="2024-12-13T13:29:01.855123293Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:29:01.856930 containerd[1449]: time="2024-12-13T13:29:01.856875088Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:01.856979 containerd[1449]: time="2024-12-13T13:29:01.856963604Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:29:01.856979 containerd[1449]: time="2024-12-13T13:29:01.856975366Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:29:01.860280 containerd[1449]: time="2024-12-13T13:29:01.860248505Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:01.860337 containerd[1449]: time="2024-12-13T13:29:01.860320069Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:29:01.860337 containerd[1449]: time="2024-12-13T13:29:01.860332943Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:29:01.860560 containerd[1449]: time="2024-12-13T13:29:01.860535724Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:01.860623 containerd[1449]: time="2024-12-13T13:29:01.860604453Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:29:01.860651 containerd[1449]: time="2024-12-13T13:29:01.860621545Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:29:01.864535 containerd[1449]: time="2024-12-13T13:29:01.864259087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:6,}" Dec 13 13:29:02.233925 kubelet[1847]: E1213 13:29:02.233753 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:02.412662 containerd[1449]: time="2024-12-13T13:29:02.412381069Z" level=error msg="Failed to destroy network for sandbox 
\"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.413820 containerd[1449]: time="2024-12-13T13:29:02.413646052Z" level=error msg="encountered an error cleaning up failed sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.413820 containerd[1449]: time="2024-12-13T13:29:02.413711665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.414542 kubelet[1847]: E1213 13:29:02.414082 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.414542 kubelet[1847]: E1213 13:29:02.414153 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:29:02.424705 kubelet[1847]: E1213 13:29:02.424231 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:29:02.424705 kubelet[1847]: E1213 13:29:02.424312 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:29:02.435277 containerd[1449]: time="2024-12-13T13:29:02.435200056Z" level=error 
msg="Failed to destroy network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.435898 containerd[1449]: time="2024-12-13T13:29:02.435742233Z" level=error msg="encountered an error cleaning up failed sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.436043 containerd[1449]: time="2024-12-13T13:29:02.435844946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.436921 kubelet[1847]: E1213 13:29:02.436445 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:02.436921 kubelet[1847]: E1213 13:29:02.436542 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:29:02.436921 kubelet[1847]: E1213 13:29:02.436572 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:29:02.437107 kubelet[1847]: E1213 13:29:02.436631 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 
13:29:02.692645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86-shm.mount: Deactivated successfully. Dec 13 13:29:02.692778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f-shm.mount: Deactivated successfully. Dec 13 13:29:02.869350 kubelet[1847]: I1213 13:29:02.869286 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f" Dec 13 13:29:02.870483 containerd[1449]: time="2024-12-13T13:29:02.869952795Z" level=info msg="StopPodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\"" Dec 13 13:29:02.870483 containerd[1449]: time="2024-12-13T13:29:02.870187866Z" level=info msg="Ensure that sandbox 10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f in task-service has been cleanup successfully" Dec 13 13:29:02.872700 systemd[1]: run-netns-cni\x2da5e2f832\x2db4ea\x2dbd7b\x2df29c\x2d9f2a8eebb745.mount: Deactivated successfully. Dec 13 13:29:02.874239 containerd[1449]: time="2024-12-13T13:29:02.874208657Z" level=info msg="TearDown network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" successfully" Dec 13 13:29:02.874239 containerd[1449]: time="2024-12-13T13:29:02.874234596Z" level=info msg="StopPodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" returns successfully" Dec 13 13:29:02.874857 containerd[1449]: time="2024-12-13T13:29:02.874527495Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" Dec 13 13:29:02.874857 containerd[1449]: time="2024-12-13T13:29:02.874590724Z" level=info msg="TearDown network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" successfully" Dec 13 13:29:02.874857 containerd[1449]: time="2024-12-13T13:29:02.874602125Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" returns successfully" Dec 13 13:29:02.876125 containerd[1449]: time="2024-12-13T13:29:02.875827954Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:02.876125 containerd[1449]: time="2024-12-13T13:29:02.875937249Z" level=info msg="TearDown network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" successfully" Dec 13 13:29:02.876125 containerd[1449]: time="2024-12-13T13:29:02.875949662Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" returns successfully" Dec 13 13:29:02.876953 containerd[1449]: time="2024-12-13T13:29:02.876631431Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:02.876953 containerd[1449]: time="2024-12-13T13:29:02.876697906Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:29:02.876953 containerd[1449]: time="2024-12-13T13:29:02.876708606Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:29:02.877128 containerd[1449]: time="2024-12-13T13:29:02.877101563Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:02.877360 containerd[1449]: 
time="2024-12-13T13:29:02.877169099Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:29:02.877360 containerd[1449]: time="2024-12-13T13:29:02.877186502Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:29:02.878398 containerd[1449]: time="2024-12-13T13:29:02.878320549Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:02.878488 containerd[1449]: time="2024-12-13T13:29:02.878406330Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:29:02.878488 containerd[1449]: time="2024-12-13T13:29:02.878463838Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:29:02.879929 containerd[1449]: time="2024-12-13T13:29:02.879029539Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:02.879929 containerd[1449]: time="2024-12-13T13:29:02.879106663Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:29:02.879929 containerd[1449]: time="2024-12-13T13:29:02.879120219Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:29:02.880129 kubelet[1847]: I1213 13:29:02.879654 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86" Dec 13 13:29:02.880589 containerd[1449]: time="2024-12-13T13:29:02.880277890Z" level=info msg="StopPodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\"" Dec 13 13:29:02.880589 containerd[1449]: time="2024-12-13T13:29:02.880463247Z" level=info msg="Ensure that sandbox a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86 in task-service has been cleanup successfully" Dec 13 13:29:02.882931 containerd[1449]: time="2024-12-13T13:29:02.880713747Z" level=info msg="TearDown network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" successfully" Dec 13 13:29:02.882757 systemd[1]: run-netns-cni\x2daaf769cc\x2dc90b\x2d3b8a\x2dd0f0\x2dfcc02be65de9.mount: Deactivated successfully. 
Dec 13 13:29:02.883164 containerd[1449]: time="2024-12-13T13:29:02.883135389Z" level=info msg="StopPodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" returns successfully" Dec 13 13:29:02.883336 containerd[1449]: time="2024-12-13T13:29:02.883310407Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:02.883509 containerd[1449]: time="2024-12-13T13:29:02.883489423Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:29:02.883985 containerd[1449]: time="2024-12-13T13:29:02.883573641Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:29:02.885531 containerd[1449]: time="2024-12-13T13:29:02.884638528Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" Dec 13 13:29:02.885531 containerd[1449]: time="2024-12-13T13:29:02.884745980Z" level=info msg="TearDown network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" successfully" Dec 13 13:29:02.885531 containerd[1449]: time="2024-12-13T13:29:02.884758233Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" returns successfully" Dec 13 13:29:02.885531 containerd[1449]: time="2024-12-13T13:29:02.884822133Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:02.885531 containerd[1449]: time="2024-12-13T13:29:02.885306130Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:29:02.885531 containerd[1449]: time="2024-12-13T13:29:02.885325517Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:29:02.886799 containerd[1449]: time="2024-12-13T13:29:02.886767681Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:02.886855 containerd[1449]: time="2024-12-13T13:29:02.886839636Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:29:02.886855 containerd[1449]: time="2024-12-13T13:29:02.886851158Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:29:02.887487 containerd[1449]: time="2024-12-13T13:29:02.887452506Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:02.887636 containerd[1449]: time="2024-12-13T13:29:02.887607316Z" level=info msg="TearDown network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" successfully" Dec 13 13:29:02.887636 containerd[1449]: time="2024-12-13T13:29:02.887627714Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" returns successfully" Dec 13 13:29:02.888116 containerd[1449]: time="2024-12-13T13:29:02.888088809Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:02.888269 containerd[1449]: time="2024-12-13T13:29:02.888251625Z" level=info msg="TearDown network for sandbox 
\"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:29:02.888342 containerd[1449]: time="2024-12-13T13:29:02.888326946Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:29:02.888657 containerd[1449]: time="2024-12-13T13:29:02.888500581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:10,}" Dec 13 13:29:02.889213 containerd[1449]: time="2024-12-13T13:29:02.889179885Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:02.889268 containerd[1449]: time="2024-12-13T13:29:02.889257501Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:29:02.889302 containerd[1449]: time="2024-12-13T13:29:02.889270505Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:29:02.890569 containerd[1449]: time="2024-12-13T13:29:02.890534316Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:02.890636 containerd[1449]: time="2024-12-13T13:29:02.890602203Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:29:02.890636 containerd[1449]: time="2024-12-13T13:29:02.890613795Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:29:02.893202 containerd[1449]: time="2024-12-13T13:29:02.893171021Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:02.893275 containerd[1449]: time="2024-12-13T13:29:02.893248606Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:29:02.893275 containerd[1449]: time="2024-12-13T13:29:02.893267331Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:29:02.893961 containerd[1449]: time="2024-12-13T13:29:02.893851517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:7,}" Dec 13 13:29:03.234326 kubelet[1847]: E1213 13:29:03.234280 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:04.235615 kubelet[1847]: E1213 13:29:04.235472 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:04.531174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729321257.mount: Deactivated successfully. 
Dec 13 13:29:04.589994 containerd[1449]: time="2024-12-13T13:29:04.589931024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:04.593273 containerd[1449]: time="2024-12-13T13:29:04.592923496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:29:04.594324 containerd[1449]: time="2024-12-13T13:29:04.594297383Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:04.600139 containerd[1449]: time="2024-12-13T13:29:04.600099485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:04.600993 containerd[1449]: time="2024-12-13T13:29:04.600970288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.919688573s" Dec 13 13:29:04.601373 containerd[1449]: time="2024-12-13T13:29:04.601353206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:29:04.632556 containerd[1449]: time="2024-12-13T13:29:04.632522745Z" level=info msg="CreateContainer within sandbox \"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:29:04.646445 containerd[1449]: time="2024-12-13T13:29:04.646254609Z" level=error msg="Failed to destroy network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.647665 containerd[1449]: time="2024-12-13T13:29:04.646963107Z" level=error msg="encountered an error cleaning up failed sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.647665 containerd[1449]: time="2024-12-13T13:29:04.647026857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.647968 kubelet[1847]: E1213 13:29:04.647234 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.647968 kubelet[1847]: E1213 13:29:04.647300 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:29:04.647968 kubelet[1847]: E1213 13:29:04.647328 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pf4jn" Dec 13 13:29:04.648147 kubelet[1847]: E1213 13:29:04.647380 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pf4jn_default(88a7e54e-548f-40c3-aece-f577d2fd980d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pf4jn" podUID="88a7e54e-548f-40c3-aece-f577d2fd980d" Dec 13 13:29:04.653142 containerd[1449]: time="2024-12-13T13:29:04.652930529Z" level=info msg="CreateContainer within sandbox \"0197d4fc902d6af39a37748ccc904d90eb2fb3c8778300eefed98c46d942b8df\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e580090b51e6e1a5f2fe71f855762599b77558799fc42c0a02e5e8e4b5bde745\"" Dec 13 13:29:04.654075 containerd[1449]: time="2024-12-13T13:29:04.654025914Z" level=info msg="StartContainer for \"e580090b51e6e1a5f2fe71f855762599b77558799fc42c0a02e5e8e4b5bde745\"" Dec 13 13:29:04.676046 containerd[1449]: time="2024-12-13T13:29:04.675980850Z" level=error msg="Failed to destroy network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.678542 containerd[1449]: time="2024-12-13T13:29:04.678016017Z" level=error msg="encountered an error cleaning up failed sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.678542 containerd[1449]: time="2024-12-13T13:29:04.678113580Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.678995 kubelet[1847]: E1213 13:29:04.678324 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:29:04.678995 kubelet[1847]: E1213 13:29:04.678383 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:29:04.678995 kubelet[1847]: E1213 13:29:04.678407 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-flq57" Dec 13 13:29:04.679127 kubelet[1847]: E1213 13:29:04.678449 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-flq57_calico-system(6b554734-fc9d-4b74-9b2d-0c0e98baacdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-flq57" podUID="6b554734-fc9d-4b74-9b2d-0c0e98baacdf" Dec 13 13:29:04.776193 systemd[1]: Started cri-containerd-e580090b51e6e1a5f2fe71f855762599b77558799fc42c0a02e5e8e4b5bde745.scope - libcontainer container e580090b51e6e1a5f2fe71f855762599b77558799fc42c0a02e5e8e4b5bde745. 
Dec 13 13:29:04.931458 containerd[1449]: time="2024-12-13T13:29:04.931100357Z" level=info msg="StartContainer for \"e580090b51e6e1a5f2fe71f855762599b77558799fc42c0a02e5e8e4b5bde745\" returns successfully" Dec 13 13:29:05.077727 kubelet[1847]: I1213 13:29:05.075973 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b" Dec 13 13:29:05.079108 containerd[1449]: time="2024-12-13T13:29:05.078068395Z" level=info msg="StopPodSandbox for \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\"" Dec 13 13:29:05.080080 containerd[1449]: time="2024-12-13T13:29:05.079799943Z" level=info msg="Ensure that sandbox 3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b in task-service has been cleanup successfully" Dec 13 13:29:05.080616 containerd[1449]: time="2024-12-13T13:29:05.080353531Z" level=info msg="TearDown network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\" successfully" Dec 13 13:29:05.080616 containerd[1449]: time="2024-12-13T13:29:05.080378438Z" level=info msg="StopPodSandbox for \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\" returns successfully" Dec 13 13:29:05.082633 containerd[1449]: time="2024-12-13T13:29:05.082443140Z" level=info msg="StopPodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\"" Dec 13 13:29:05.083027 containerd[1449]: time="2024-12-13T13:29:05.082574507Z" level=info msg="TearDown network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" successfully" Dec 13 13:29:05.083027 containerd[1449]: time="2024-12-13T13:29:05.082656540Z" level=info msg="StopPodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" returns successfully" Dec 13 13:29:05.083931 containerd[1449]: time="2024-12-13T13:29:05.083792801Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" Dec 13 13:29:05.084645 containerd[1449]: time="2024-12-13T13:29:05.084601839Z" level=info msg="TearDown network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" successfully" Dec 13 13:29:05.084645 containerd[1449]: time="2024-12-13T13:29:05.084627246Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" returns successfully" Dec 13 13:29:05.085843 containerd[1449]: time="2024-12-13T13:29:05.085657619Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:05.086579 containerd[1449]: time="2024-12-13T13:29:05.086128783Z" level=info msg="TearDown network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" successfully" Dec 13 13:29:05.086579 containerd[1449]: time="2024-12-13T13:29:05.086300404Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" returns successfully" Dec 13 13:29:05.087938 containerd[1449]: time="2024-12-13T13:29:05.087061933Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:05.087938 containerd[1449]: time="2024-12-13T13:29:05.087162271Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:29:05.087938 containerd[1449]: time="2024-12-13T13:29:05.087180235Z" level=info msg="StopPodSandbox for 
\"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:29:05.089060 containerd[1449]: time="2024-12-13T13:29:05.089025285Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:05.089505 containerd[1449]: time="2024-12-13T13:29:05.089474918Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:29:05.089505 containerd[1449]: time="2024-12-13T13:29:05.089495617Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:29:05.090472 kubelet[1847]: I1213 13:29:05.090430 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc" Dec 13 13:29:05.091271 containerd[1449]: time="2024-12-13T13:29:05.091216685Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:05.092185 containerd[1449]: time="2024-12-13T13:29:05.091318185Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:29:05.092251 containerd[1449]: time="2024-12-13T13:29:05.092217512Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:29:05.092284 containerd[1449]: time="2024-12-13T13:29:05.091955280Z" level=info msg="StopPodSandbox for \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\"" Dec 13 13:29:05.092521 containerd[1449]: time="2024-12-13T13:29:05.092493740Z" level=info msg="Ensure that sandbox c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc in task-service has been cleanup successfully" Dec 13 13:29:05.093237 containerd[1449]: time="2024-12-13T13:29:05.093208461Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:05.093334 containerd[1449]: time="2024-12-13T13:29:05.093309069Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:29:05.093334 containerd[1449]: time="2024-12-13T13:29:05.093329738Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:29:05.094712 containerd[1449]: time="2024-12-13T13:29:05.094682435Z" level=info msg="TearDown network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\" successfully" Dec 13 13:29:05.094712 containerd[1449]: time="2024-12-13T13:29:05.094707191Z" level=info msg="StopPodSandbox for \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\" returns successfully" Dec 13 13:29:05.095246 containerd[1449]: time="2024-12-13T13:29:05.095126788Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:05.095246 containerd[1449]: time="2024-12-13T13:29:05.095224612Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:29:05.095246 containerd[1449]: time="2024-12-13T13:29:05.095237235Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:29:05.095520 containerd[1449]: 
time="2024-12-13T13:29:05.095410821Z" level=info msg="StopPodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\"" Dec 13 13:29:05.095631 containerd[1449]: time="2024-12-13T13:29:05.095527430Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:05.095631 containerd[1449]: time="2024-12-13T13:29:05.095592031Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:29:05.095631 containerd[1449]: time="2024-12-13T13:29:05.095602511Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:29:05.098037 kubelet[1847]: I1213 13:29:05.097526 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f2np9" podStartSLOduration=4.214091293 podStartE2EDuration="30.097484691s" podCreationTimestamp="2024-12-13 13:28:35 +0000 UTC" firstStartedPulling="2024-12-13 13:28:38.718821224 +0000 UTC m=+4.170921543" lastFinishedPulling="2024-12-13 13:29:04.602214622 +0000 UTC m=+30.054314941" observedRunningTime="2024-12-13 13:29:05.0946107 +0000 UTC m=+30.546711019" watchObservedRunningTime="2024-12-13 13:29:05.097484691 +0000 UTC m=+30.549585020" Dec 13 13:29:05.098150 containerd[1449]: time="2024-12-13T13:29:05.098052666Z" level=info msg="TearDown network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" successfully" Dec 13 13:29:05.098150 containerd[1449]: time="2024-12-13T13:29:05.098077412Z" level=info msg="StopPodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" returns successfully" Dec 13 13:29:05.098327 containerd[1449]: time="2024-12-13T13:29:05.098300040Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:05.098415 containerd[1449]: time="2024-12-13T13:29:05.098389678Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:29:05.098415 containerd[1449]: time="2024-12-13T13:29:05.098407261Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:29:05.100207 containerd[1449]: time="2024-12-13T13:29:05.100174716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:11,}" Dec 13 13:29:05.102008 containerd[1449]: time="2024-12-13T13:29:05.101952600Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" Dec 13 13:29:05.102175 containerd[1449]: time="2024-12-13T13:29:05.102096500Z" level=info msg="TearDown network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" successfully" Dec 13 13:29:05.102175 containerd[1449]: time="2024-12-13T13:29:05.102119794Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" returns successfully" Dec 13 13:29:05.104732 containerd[1449]: time="2024-12-13T13:29:05.104546796Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:05.104732 containerd[1449]: time="2024-12-13T13:29:05.104652043Z" level=info msg="TearDown network for sandbox 
\"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" successfully" Dec 13 13:29:05.104732 containerd[1449]: time="2024-12-13T13:29:05.104664987Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" returns successfully" Dec 13 13:29:05.105346 containerd[1449]: time="2024-12-13T13:29:05.105240647Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:05.105346 containerd[1449]: time="2024-12-13T13:29:05.105321168Z" level=info msg="TearDown network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:29:05.105346 containerd[1449]: time="2024-12-13T13:29:05.105334022Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:29:05.107258 containerd[1449]: time="2024-12-13T13:29:05.107202206Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:05.107313 containerd[1449]: time="2024-12-13T13:29:05.107291744Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:29:05.107313 containerd[1449]: time="2024-12-13T13:29:05.107304808Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:29:05.107712 containerd[1449]: time="2024-12-13T13:29:05.107670554Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:05.107761 containerd[1449]: time="2024-12-13T13:29:05.107750845Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:29:05.107791 containerd[1449]: time="2024-12-13T13:29:05.107764160Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:29:05.109930 containerd[1449]: time="2024-12-13T13:29:05.109286405Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:05.109930 containerd[1449]: time="2024-12-13T13:29:05.109429223Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:29:05.109930 containerd[1449]: time="2024-12-13T13:29:05.109443459Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:29:05.109930 containerd[1449]: time="2024-12-13T13:29:05.109802212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:8,}" Dec 13 13:29:05.113912 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:29:05.114144 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 13:29:05.236419 kubelet[1847]: E1213 13:29:05.236302 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:05.508631 systemd[1]: run-netns-cni\x2dcb42dcf6\x2d984e\x2dd40c\x2db5d6\x2d04b977d00b78.mount: Deactivated successfully. 
Dec 13 13:29:05.508739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b-shm.mount: Deactivated successfully. Dec 13 13:29:05.508814 systemd[1]: run-netns-cni\x2d8b24bf1a\x2d3e29\x2d3be4\x2d9d7b\x2db944a84aa31e.mount: Deactivated successfully. Dec 13 13:29:05.509338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc-shm.mount: Deactivated successfully. Dec 13 13:29:05.963563 systemd-networkd[1362]: calif657fe9ca38: Link UP Dec 13 13:29:05.966728 systemd-networkd[1362]: calif657fe9ca38: Gained carrier Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.243 [INFO][3014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.467 [INFO][3014] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0 nginx-deployment-8587fbcb89- default 88a7e54e-548f-40c3-aece-f577d2fd980d 1145 0 2024-12-13 13:28:55 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.94 nginx-deployment-8587fbcb89-pf4jn eth0 default [] [] [kns.default ksa.default.default] calif657fe9ca38 [] []}} ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.467 [INFO][3014] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.552 [INFO][3037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" HandleID="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Workload="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.595 [INFO][3037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" HandleID="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Workload="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba460), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.94", "pod":"nginx-deployment-8587fbcb89-pf4jn", "timestamp":"2024-12-13 13:29:05.552518136 +0000 UTC"}, Hostname:"172.24.4.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.595 [INFO][3037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.595 [INFO][3037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.595 [INFO][3037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.94' Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.599 [INFO][3037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.604 [INFO][3037] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.617 [INFO][3037] ipam/ipam.go 521: Ran out of existing affine blocks for host host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.621 [INFO][3037] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.791 [INFO][3037] ipam/ipam.go 550: Found unclaimed block host="172.24.4.94" subnet=192.168.110.192/26 Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.791 [INFO][3037] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="172.24.4.94" subnet=192.168.110.192/26 Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.803 [INFO][3037] ipam/ipam_block_reader_writer.go 182: Block affinity already exists, getting existing affinity host="172.24.4.94" subnet=192.168.110.192/26 Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.810 [INFO][3037] ipam/ipam_block_reader_writer.go 190: Got existing affinity host="172.24.4.94" subnet=192.168.110.192/26 Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.810 [INFO][3037] ipam/ipam_block_reader_writer.go 198: Existing affinity is already confirmed host="172.24.4.94" subnet=192.168.110.192/26 Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.810 [INFO][3037] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.820 [INFO][3037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.820 [INFO][3037] ipam/ipam.go 585: Block '192.168.110.192/26' has 64 free ips which is more than 1 ips required. 
host="172.24.4.94" subnet=192.168.110.192/26 Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.820 [INFO][3037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.828 [INFO][3037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d Dec 13 13:29:05.993344 containerd[1449]: 2024-12-13 13:29:05.842 [INFO][3037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.848 [ERROR][3037] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-110-192-26) Name="192-168-110-192-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-110-192-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.110.192/26", Affinity:(*string)(0xc0002ed660), Allocations:[]*int{(*int)(0xc000012620), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003ba460), AttrSecondary:map[string]string{"namespace":"default", "node":"172.24.4.94", "pod":"nginx-deployment-8587fbcb89-pf4jn", "timestamp":"2024-12-13 13:29:05.552518136 +0000 UTC"}}}, SequenceNumber:0x1810bf9ee08009ad, SequenceNumberForAllocation:map[string]uint64{"0":0x1810bf9ee08009ac}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-110-192-26": the object has been modified; please apply your changes to the latest version and try again Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.848 [INFO][3037] ipam/ipam.go 1207: Failed to update block block=192.168.110.192/26 error=update conflict: 
IPAMBlock(192-168-110-192-26) handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.906 [INFO][3037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.909 [INFO][3037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.919 [INFO][3037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.928 [INFO][3037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.193/26] block=192.168.110.192/26 handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.928 [INFO][3037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.193/26] handle="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" host="172.24.4.94" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.929 [INFO][3037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.929 [INFO][3037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.193/26] IPv6=[] ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" HandleID="k8s-pod-network.9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Workload="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:05.995400 containerd[1449]: 2024-12-13 13:29:05.935 [INFO][3014] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"88a7e54e-548f-40c3-aece-f577d2fd980d", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-pf4jn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.110.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif657fe9ca38", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:05.995765 containerd[1449]: 2024-12-13 13:29:05.936 [INFO][3014] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.193/32] ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:05.995765 containerd[1449]: 2024-12-13 13:29:05.936 [INFO][3014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif657fe9ca38 ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:05.995765 containerd[1449]: 2024-12-13 13:29:05.965 [INFO][3014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:05.995765 containerd[1449]: 2024-12-13 13:29:05.968 [INFO][3014] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"88a7e54e-548f-40c3-aece-f577d2fd980d", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d", Pod:"nginx-deployment-8587fbcb89-pf4jn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.110.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif657fe9ca38", MAC:"d6:51:05:e1:5a:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:05.995765 containerd[1449]: 2024-12-13 13:29:05.987 [INFO][3014] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d" Namespace="default" Pod="nginx-deployment-8587fbcb89-pf4jn" WorkloadEndpoint="172.24.4.94-k8s-nginx--deployment--8587fbcb89--pf4jn-eth0" Dec 13 13:29:06.033846 systemd-networkd[1362]: cali624ee51bc94: Link UP Dec 13 13:29:06.034707 systemd-networkd[1362]: cali624ee51bc94: Gained carrier Dec 13 13:29:06.038971 containerd[1449]: time="2024-12-13T13:29:06.038624469Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:06.038971 containerd[1449]: time="2024-12-13T13:29:06.038702496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:06.038971 containerd[1449]: time="2024-12-13T13:29:06.038722955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.038971 containerd[1449]: time="2024-12-13T13:29:06.038812574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.225 [INFO][3003] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.466 [INFO][3003] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.94-k8s-csi--node--driver--flq57-eth0 csi-node-driver- calico-system 6b554734-fc9d-4b74-9b2d-0c0e98baacdf 1037 0 2024-12-13 13:28:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.94 csi-node-driver-flq57 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali624ee51bc94 [] []}} ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.466 [INFO][3003] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.533 [INFO][3033] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" HandleID="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Workload="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.596 [INFO][3033] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" HandleID="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Workload="172.24.4.94-k8s-csi--node--driver--flq57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319c30), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.94", "pod":"csi-node-driver-flq57", "timestamp":"2024-12-13 13:29:05.533361628 +0000 UTC"}, Hostname:"172.24.4.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.596 [INFO][3033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.929 [INFO][3033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.929 [INFO][3033] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.94' Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.936 [INFO][3033] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.947 [INFO][3033] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.961 [INFO][3033] ipam/ipam.go 489: Trying affinity for 192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.967 [INFO][3033] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.973 [INFO][3033] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.974 [INFO][3033] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.986 [INFO][3033] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2 Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:05.997 [INFO][3033] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:06.025 [INFO][3033] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.194/26] block=192.168.110.192/26 handle="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:06.025 [INFO][3033] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.194/26] handle="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" host="172.24.4.94" Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:06.025 [INFO][3033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
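The [3037] and [3033] IPAM handlers above take turns under the host-wide IPAM lock, load the affine block 192.168.110.192/26, and claim one address each by writing the block back. The write at 13:29:05.848 lost an optimistic-concurrency race ("the object has been modified; please apply your changes to the latest version and try again"), so the allocator re-read the block and retried before handing out 192.168.110.193; the CSI driver's handler then got 192.168.110.194 from the same block. Below is a compact sketch of that compare-and-retry pattern; Block, Store, and the field names are hypothetical stand-ins for the IPAMBlock CRD client, not Calico's real types.

```go
// Sketch of the optimistic-concurrency retry visible above: a conflicting
// write is answered by re-reading the block and trying again.
package main

import (
	"errors"
	"fmt"
)

var ErrConflict = errors.New("update conflict: please apply your changes to the latest version and try again")

type Block struct {
	ResourceVersion int
	Unallocated     []int // free ordinals within the block
}

type Store struct{ current Block }

func (s *Store) Get() Block { return s.current }

// Update succeeds only if the caller saw the latest version (compare-and-swap).
func (s *Store) Update(b Block) error {
	if b.ResourceVersion != s.current.ResourceVersion {
		return ErrConflict
	}
	b.ResourceVersion++
	s.current = b
	return nil
}

// claimOne pops the first free ordinal, retrying on version conflicts.
func claimOne(s *Store, retries int) (int, error) {
	for i := 0; i < retries; i++ {
		b := s.Get()
		if len(b.Unallocated) == 0 {
			return 0, errors.New("block full")
		}
		ord := b.Unallocated[0]
		b.Unallocated = b.Unallocated[1:]
		if err := s.Update(b); err != nil {
			if errors.Is(err, ErrConflict) {
				continue // someone else wrote first; re-read and retry
			}
			return 0, err
		}
		return ord, nil
	}
	return 0, errors.New("out of retries")
}

func main() {
	s := &Store{current: Block{Unallocated: []int{1, 2, 3}}}
	a, _ := claimOne(s, 5)
	b, _ := claimOne(s, 5)
	fmt.Println(a, b) // two claims, two distinct ordinals, as in the log
}
```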
Dec 13 13:29:06.054023 containerd[1449]: 2024-12-13 13:29:06.025 [INFO][3033] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.194/26] IPv6=[] ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" HandleID="k8s-pod-network.e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Workload="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.054745 containerd[1449]: 2024-12-13 13:29:06.030 [INFO][3003] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-csi--node--driver--flq57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b554734-fc9d-4b74-9b2d-0c0e98baacdf", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"", Pod:"csi-node-driver-flq57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali624ee51bc94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:06.054745 containerd[1449]: 2024-12-13 13:29:06.030 [INFO][3003] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.194/32] ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.054745 containerd[1449]: 2024-12-13 13:29:06.030 [INFO][3003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali624ee51bc94 ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.054745 containerd[1449]: 2024-12-13 13:29:06.035 [INFO][3003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.054745 containerd[1449]: 2024-12-13 13:29:06.035 [INFO][3003] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" 
WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-csi--node--driver--flq57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b554734-fc9d-4b74-9b2d-0c0e98baacdf", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2", Pod:"csi-node-driver-flq57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali624ee51bc94", MAC:"ce:72:64:98:5c:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:06.054745 containerd[1449]: 2024-12-13 13:29:06.051 [INFO][3003] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2" Namespace="calico-system" Pod="csi-node-driver-flq57" WorkloadEndpoint="172.24.4.94-k8s-csi--node--driver--flq57-eth0" Dec 13 13:29:06.072053 systemd[1]: Started cri-containerd-9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d.scope - libcontainer container 9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d. Dec 13 13:29:06.088103 containerd[1449]: time="2024-12-13T13:29:06.087390210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:06.088103 containerd[1449]: time="2024-12-13T13:29:06.087467886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:06.088103 containerd[1449]: time="2024-12-13T13:29:06.087488775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.088103 containerd[1449]: time="2024-12-13T13:29:06.087579967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.128740 systemd[1]: Started cri-containerd-e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2.scope - libcontainer container e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2. 
Dec 13 13:29:06.163628 containerd[1449]: time="2024-12-13T13:29:06.163526362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pf4jn,Uid:88a7e54e-548f-40c3-aece-f577d2fd980d,Namespace:default,Attempt:8,} returns sandbox id \"9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d\"" Dec 13 13:29:06.169011 containerd[1449]: time="2024-12-13T13:29:06.168985074Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:29:06.189156 containerd[1449]: time="2024-12-13T13:29:06.189036627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-flq57,Uid:6b554734-fc9d-4b74-9b2d-0c0e98baacdf,Namespace:calico-system,Attempt:11,} returns sandbox id \"e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2\"" Dec 13 13:29:06.237984 kubelet[1847]: E1213 13:29:06.236856 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:06.496808 systemd[1]: run-containerd-runc-k8s.io-e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2-runc.M4wffP.mount: Deactivated successfully. Dec 13 13:29:07.116026 systemd-networkd[1362]: cali624ee51bc94: Gained IPv6LL Dec 13 13:29:07.238787 kubelet[1847]: E1213 13:29:07.238723 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:07.306013 kernel: bpftool[3297]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:29:07.562075 systemd-networkd[1362]: calif657fe9ca38: Gained IPv6LL Dec 13 13:29:07.593840 systemd-networkd[1362]: vxlan.calico: Link UP Dec 13 13:29:07.593850 systemd-networkd[1362]: vxlan.calico: Gained carrier Dec 13 13:29:07.845674 systemd[1]: run-containerd-runc-k8s.io-e580090b51e6e1a5f2fe71f855762599b77558799fc42c0a02e5e8e4b5bde745-runc.NjcDb1.mount: Deactivated successfully. Dec 13 13:29:08.239786 kubelet[1847]: E1213 13:29:08.239651 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:08.714141 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Dec 13 13:29:09.240550 kubelet[1847]: E1213 13:29:09.240507 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:09.788473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250848775.mount: Deactivated successfully. 
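With both sandboxes up and their addresses assigned, kubelet moves on to the per-container half of the CRI: pull the nginx image into the 9efab0e1… sandbox, create the container, and start it (the same sequence plays out for calico-csi just below). The sketch that follows shows that sequence against the CRI API; the socket path and the heavily stripped-down container config are illustrative assumptions, since kubelet populates far more of the config in practice.

```go
// Sketch of the PullImage -> CreateContainer -> StartContainer sequence that
// the log records next for the nginx pod (k8s.io/cri-api).
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	ctx := context.Background()
	images := runtimeapi.NewImageServiceClient(conn)
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Pull the image the container will run.
	if _, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/nginx:latest"},
	}); err != nil {
		log.Fatal(err)
	}

	// Create the container inside the already-running sandbox, then start it.
	sandboxID := "9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "nginx", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/nginx:latest"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name: "nginx-deployment-8587fbcb89-pf4jn", Namespace: "default",
				Uid: "88a7e54e-548f-40c3-aece-f577d2fd980d", Attempt: 8,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```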
Dec 13 13:29:10.241062 kubelet[1847]: E1213 13:29:10.240935 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:11.141069 containerd[1449]: time="2024-12-13T13:29:11.140903338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:11.143292 containerd[1449]: time="2024-12-13T13:29:11.143123667Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 13:29:11.144357 containerd[1449]: time="2024-12-13T13:29:11.144224610Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:11.148919 containerd[1449]: time="2024-12-13T13:29:11.148145251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:11.151504 containerd[1449]: time="2024-12-13T13:29:11.151048648Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 4.981893714s" Dec 13 13:29:11.151504 containerd[1449]: time="2024-12-13T13:29:11.151132536Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:29:11.153510 containerd[1449]: time="2024-12-13T13:29:11.153478151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:29:11.154593 containerd[1449]: time="2024-12-13T13:29:11.154569586Z" level=info msg="CreateContainer within sandbox \"9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 13:29:11.172090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796119915.mount: Deactivated successfully. Dec 13 13:29:11.182348 containerd[1449]: time="2024-12-13T13:29:11.182287338Z" level=info msg="CreateContainer within sandbox \"9efab0e173ffa5836f9e0f404c272a24f8cca5580f42866ad3bd4f72fac4ae9d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7537d73d65de27e80a9b294fe046c0cba388ce981ac2c164de30ede3ace4690d\"" Dec 13 13:29:11.183429 containerd[1449]: time="2024-12-13T13:29:11.183215476Z" level=info msg="StartContainer for \"7537d73d65de27e80a9b294fe046c0cba388ce981ac2c164de30ede3ace4690d\"" Dec 13 13:29:11.234097 systemd[1]: Started cri-containerd-7537d73d65de27e80a9b294fe046c0cba388ce981ac2c164de30ede3ace4690d.scope - libcontainer container 7537d73d65de27e80a9b294fe046c0cba388ce981ac2c164de30ede3ace4690d. 
Dec 13 13:29:11.241869 kubelet[1847]: E1213 13:29:11.241828 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:11.267007 containerd[1449]: time="2024-12-13T13:29:11.266950608Z" level=info msg="StartContainer for \"7537d73d65de27e80a9b294fe046c0cba388ce981ac2c164de30ede3ace4690d\" returns successfully" Dec 13 13:29:12.243134 kubelet[1847]: E1213 13:29:12.243033 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:13.179400 containerd[1449]: time="2024-12-13T13:29:13.179318069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:13.180528 containerd[1449]: time="2024-12-13T13:29:13.180463414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:29:13.181846 containerd[1449]: time="2024-12-13T13:29:13.181795702Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:13.184918 containerd[1449]: time="2024-12-13T13:29:13.184852534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:13.185617 containerd[1449]: time="2024-12-13T13:29:13.185570885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.031855948s" Dec 13 13:29:13.185666 containerd[1449]: time="2024-12-13T13:29:13.185615169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:29:13.188041 containerd[1449]: time="2024-12-13T13:29:13.187977795Z" level=info msg="CreateContainer within sandbox \"e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:29:13.223636 containerd[1449]: time="2024-12-13T13:29:13.223579429Z" level=info msg="CreateContainer within sandbox \"e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ac7124dce34fc1466725909d6890b81626250c225a9cde6c044fdb1f1f201fc6\"" Dec 13 13:29:13.225846 containerd[1449]: time="2024-12-13T13:29:13.224308300Z" level=info msg="StartContainer for \"ac7124dce34fc1466725909d6890b81626250c225a9cde6c044fdb1f1f201fc6\"" Dec 13 13:29:13.243780 kubelet[1847]: E1213 13:29:13.243741 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:13.260695 systemd[1]: Started cri-containerd-ac7124dce34fc1466725909d6890b81626250c225a9cde6c044fdb1f1f201fc6.scope - libcontainer container ac7124dce34fc1466725909d6890b81626250c225a9cde6c044fdb1f1f201fc6. 
Dec 13 13:29:13.456050 containerd[1449]: time="2024-12-13T13:29:13.455275808Z" level=info msg="StartContainer for \"ac7124dce34fc1466725909d6890b81626250c225a9cde6c044fdb1f1f201fc6\" returns successfully" Dec 13 13:29:13.458712 containerd[1449]: time="2024-12-13T13:29:13.458617316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:29:14.244600 kubelet[1847]: E1213 13:29:14.244545 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:15.197941 kubelet[1847]: E1213 13:29:15.197792 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:15.245350 kubelet[1847]: E1213 13:29:15.245286 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:15.503429 containerd[1449]: time="2024-12-13T13:29:15.503303696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:15.504917 containerd[1449]: time="2024-12-13T13:29:15.504800812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:29:15.505974 containerd[1449]: time="2024-12-13T13:29:15.505925307Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:15.508627 containerd[1449]: time="2024-12-13T13:29:15.508604186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:15.509555 containerd[1449]: time="2024-12-13T13:29:15.509409151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.050706153s" Dec 13 13:29:15.509555 containerd[1449]: time="2024-12-13T13:29:15.509451861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:29:15.512337 containerd[1449]: time="2024-12-13T13:29:15.512187937Z" level=info msg="CreateContainer within sandbox \"e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:29:15.536440 containerd[1449]: time="2024-12-13T13:29:15.536336571Z" level=info msg="CreateContainer within sandbox \"e7d4556aae7ff807bc67a1d8cd054d09004771ed2441a05d4245a0b4ea24f9b2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"74a852ab980692b96e2e4da2e797e4bba794c88f2cce678917491e82ed120e15\"" Dec 13 13:29:15.538164 containerd[1449]: time="2024-12-13T13:29:15.537406293Z" level=info msg="StartContainer for \"74a852ab980692b96e2e4da2e797e4bba794c88f2cce678917491e82ed120e15\"" Dec 13 13:29:15.570143 systemd[1]: Started 
cri-containerd-74a852ab980692b96e2e4da2e797e4bba794c88f2cce678917491e82ed120e15.scope - libcontainer container 74a852ab980692b96e2e4da2e797e4bba794c88f2cce678917491e82ed120e15. Dec 13 13:29:15.612921 containerd[1449]: time="2024-12-13T13:29:15.612864991Z" level=info msg="StartContainer for \"74a852ab980692b96e2e4da2e797e4bba794c88f2cce678917491e82ed120e15\" returns successfully" Dec 13 13:29:16.245851 kubelet[1847]: E1213 13:29:16.245756 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:16.257839 kubelet[1847]: I1213 13:29:16.257692 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-pf4jn" podStartSLOduration=16.273338221 podStartE2EDuration="21.257612288s" podCreationTimestamp="2024-12-13 13:28:55 +0000 UTC" firstStartedPulling="2024-12-13 13:29:06.168194604 +0000 UTC m=+31.620294933" lastFinishedPulling="2024-12-13 13:29:11.152468631 +0000 UTC m=+36.604569000" observedRunningTime="2024-12-13 13:29:12.212655677 +0000 UTC m=+37.664756056" watchObservedRunningTime="2024-12-13 13:29:16.257612288 +0000 UTC m=+41.709712657" Dec 13 13:29:16.378735 kubelet[1847]: I1213 13:29:16.378348 1847 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:29:16.378735 kubelet[1847]: I1213 13:29:16.378399 1847 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:29:17.246102 kubelet[1847]: E1213 13:29:17.245991 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:18.246559 kubelet[1847]: E1213 13:29:18.246478 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:19.247538 kubelet[1847]: E1213 13:29:19.247411 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:20.247680 kubelet[1847]: E1213 13:29:20.247587 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:21.247861 kubelet[1847]: E1213 13:29:21.247793 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:22.248775 kubelet[1847]: E1213 13:29:22.248674 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:23.249963 kubelet[1847]: E1213 13:29:23.249795 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:24.250714 kubelet[1847]: E1213 13:29:24.250604 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:25.250832 kubelet[1847]: E1213 13:29:25.250755 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:26.251071 kubelet[1847]: E1213 13:29:26.250992 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:26.361034 kubelet[1847]: I1213 13:29:26.360927 1847 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/csi-node-driver-flq57" podStartSLOduration=42.040660417 podStartE2EDuration="51.360869446s" podCreationTimestamp="2024-12-13 13:28:35 +0000 UTC" firstStartedPulling="2024-12-13 13:29:06.190525585 +0000 UTC m=+31.642625904" lastFinishedPulling="2024-12-13 13:29:15.510734603 +0000 UTC m=+40.962834933" observedRunningTime="2024-12-13 13:29:16.258745439 +0000 UTC m=+41.710845808" watchObservedRunningTime="2024-12-13 13:29:26.360869446 +0000 UTC m=+51.812969815" Dec 13 13:29:26.374128 systemd[1]: Created slice kubepods-besteffort-pod534d683f_543b_4119_80a3_4ea8a6d34ab4.slice - libcontainer container kubepods-besteffort-pod534d683f_543b_4119_80a3_4ea8a6d34ab4.slice. Dec 13 13:29:26.488343 kubelet[1847]: I1213 13:29:26.488181 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/534d683f-543b-4119-80a3-4ea8a6d34ab4-data\") pod \"nfs-server-provisioner-0\" (UID: \"534d683f-543b-4119-80a3-4ea8a6d34ab4\") " pod="default/nfs-server-provisioner-0" Dec 13 13:29:26.488343 kubelet[1847]: I1213 13:29:26.488283 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcdp\" (UniqueName: \"kubernetes.io/projected/534d683f-543b-4119-80a3-4ea8a6d34ab4-kube-api-access-mlcdp\") pod \"nfs-server-provisioner-0\" (UID: \"534d683f-543b-4119-80a3-4ea8a6d34ab4\") " pod="default/nfs-server-provisioner-0" Dec 13 13:29:26.679933 containerd[1449]: time="2024-12-13T13:29:26.679749419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:534d683f-543b-4119-80a3-4ea8a6d34ab4,Namespace:default,Attempt:0,}" Dec 13 13:29:26.883368 systemd-networkd[1362]: cali60e51b789ff: Link UP Dec 13 13:29:26.884371 systemd-networkd[1362]: cali60e51b789ff: Gained carrier Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.780 [INFO][3584] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.94-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 534d683f-543b-4119-80a3-4ea8a6d34ab4 1294 0 2024-12-13 13:29:26 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.94 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.780 [INFO][3584] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.816 [INFO][3596] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" HandleID="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Workload="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.834 [INFO][3596] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" HandleID="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Workload="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318960), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.94", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 13:29:26.81640708 +0000 UTC"}, Hostname:"172.24.4.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.834 [INFO][3596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.834 [INFO][3596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.834 [INFO][3596] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.94' Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.838 [INFO][3596] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.847 [INFO][3596] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.854 [INFO][3596] ipam/ipam.go 489: Trying affinity for 192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.857 [INFO][3596] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.860 [INFO][3596] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.860 [INFO][3596] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.863 [INFO][3596] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.869 [INFO][3596] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.877 [INFO][3596] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.195/26] block=192.168.110.192/26 
handle="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.877 [INFO][3596] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.195/26] handle="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" host="172.24.4.94" Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.877 [INFO][3596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:29:26.899209 containerd[1449]: 2024-12-13 13:29:26.877 [INFO][3596] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.195/26] IPv6=[] ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" HandleID="k8s-pod-network.46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Workload="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.902235 containerd[1449]: 2024-12-13 13:29:26.880 [INFO][3584] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"534d683f-543b-4119-80a3-4ea8a6d34ab4", ResourceVersion:"1294", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.110.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:26.902235 containerd[1449]: 2024-12-13 13:29:26.880 [INFO][3584] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.195/32] ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.902235 containerd[1449]: 2024-12-13 13:29:26.880 [INFO][3584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.902235 containerd[1449]: 2024-12-13 13:29:26.884 [INFO][3584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.902625 containerd[1449]: 2024-12-13 13:29:26.884 [INFO][3584] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"534d683f-543b-4119-80a3-4ea8a6d34ab4", ResourceVersion:"1294", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.110.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"06:12:c8:3a:2b:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:26.902625 containerd[1449]: 2024-12-13 13:29:26.897 [INFO][3584] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.94-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:26.944677 containerd[1449]: time="2024-12-13T13:29:26.943096389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:26.944677 containerd[1449]: time="2024-12-13T13:29:26.943175047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:26.944677 containerd[1449]: time="2024-12-13T13:29:26.943199012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:26.944677 containerd[1449]: time="2024-12-13T13:29:26.943309659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:26.970101 systemd[1]: Started cri-containerd-46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c.scope - libcontainer container 46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c. Dec 13 13:29:27.012520 containerd[1449]: time="2024-12-13T13:29:27.012480104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:534d683f-543b-4119-80a3-4ea8a6d34ab4,Namespace:default,Attempt:0,} returns sandbox id \"46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c\"" Dec 13 13:29:27.014293 containerd[1449]: time="2024-12-13T13:29:27.014241842Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:29:27.251675 kubelet[1847]: E1213 13:29:27.251373 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:28.252425 kubelet[1847]: E1213 13:29:28.252376 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:28.491608 systemd-networkd[1362]: cali60e51b789ff: Gained IPv6LL Dec 13 13:29:29.252827 kubelet[1847]: E1213 13:29:29.252761 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:30.254472 kubelet[1847]: E1213 13:29:30.254332 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:30.645713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216736080.mount: Deactivated successfully. Dec 13 13:29:31.254609 kubelet[1847]: E1213 13:29:31.254520 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:32.254768 kubelet[1847]: E1213 13:29:32.254681 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:33.255540 kubelet[1847]: E1213 13:29:33.255494 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:33.908917 containerd[1449]: time="2024-12-13T13:29:33.907977326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:33.910507 containerd[1449]: time="2024-12-13T13:29:33.910466599Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 13:29:33.911937 containerd[1449]: time="2024-12-13T13:29:33.911867388Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:33.915658 containerd[1449]: time="2024-12-13T13:29:33.915603362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:33.916762 containerd[1449]: time="2024-12-13T13:29:33.916648333Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.902357178s" Dec 13 13:29:33.916762 containerd[1449]: time="2024-12-13T13:29:33.916678390Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 13:29:33.919292 containerd[1449]: time="2024-12-13T13:29:33.919259715Z" level=info msg="CreateContainer within sandbox \"46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:29:33.949438 containerd[1449]: time="2024-12-13T13:29:33.949391730Z" level=info msg="CreateContainer within sandbox \"46f8da9e1c72b762abe3fa97dafa525cc173dc76c7e24558930159351459453c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4e6fc721cf7c2b644db00152ad1aa5f63d9d68ea4c39f229c924a67a5ad71916\"" Dec 13 13:29:33.950142 containerd[1449]: time="2024-12-13T13:29:33.950071175Z" level=info msg="StartContainer for \"4e6fc721cf7c2b644db00152ad1aa5f63d9d68ea4c39f229c924a67a5ad71916\"" Dec 13 13:29:33.983034 systemd[1]: Started cri-containerd-4e6fc721cf7c2b644db00152ad1aa5f63d9d68ea4c39f229c924a67a5ad71916.scope - libcontainer container 4e6fc721cf7c2b644db00152ad1aa5f63d9d68ea4c39f229c924a67a5ad71916. Dec 13 13:29:34.016493 containerd[1449]: time="2024-12-13T13:29:34.016447496Z" level=info msg="StartContainer for \"4e6fc721cf7c2b644db00152ad1aa5f63d9d68ea4c39f229c924a67a5ad71916\" returns successfully" Dec 13 13:29:34.256480 kubelet[1847]: E1213 13:29:34.256333 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:35.197262 kubelet[1847]: E1213 13:29:35.197192 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:35.257011 kubelet[1847]: E1213 13:29:35.256945 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:35.274431 containerd[1449]: time="2024-12-13T13:29:35.274346000Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:35.275321 containerd[1449]: time="2024-12-13T13:29:35.274552007Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:29:35.275321 containerd[1449]: time="2024-12-13T13:29:35.274644300Z" level=info msg="StopPodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:29:35.286397 containerd[1449]: time="2024-12-13T13:29:35.286308404Z" level=info msg="RemovePodSandbox for \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:35.327970 containerd[1449]: time="2024-12-13T13:29:35.327862773Z" level=info msg="Forcibly stopping sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\"" Dec 13 13:29:35.391823 containerd[1449]: time="2024-12-13T13:29:35.365426540Z" level=info msg="TearDown network for sandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" successfully" Dec 13 13:29:35.408183 containerd[1449]: time="2024-12-13T13:29:35.408128832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.408296 containerd[1449]: time="2024-12-13T13:29:35.408198193Z" level=info msg="RemovePodSandbox \"44e70081d2cdda96567481020ec4198920f6778b69518074c0e3574f13ad78fb\" returns successfully" Dec 13 13:29:35.408937 containerd[1449]: time="2024-12-13T13:29:35.408686459Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:35.408937 containerd[1449]: time="2024-12-13T13:29:35.408771228Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:29:35.408937 containerd[1449]: time="2024-12-13T13:29:35.408813628Z" level=info msg="StopPodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:29:35.409205 containerd[1449]: time="2024-12-13T13:29:35.409119252Z" level=info msg="RemovePodSandbox for \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:35.409301 containerd[1449]: time="2024-12-13T13:29:35.409217226Z" level=info msg="Forcibly stopping sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\"" Dec 13 13:29:35.409383 containerd[1449]: time="2024-12-13T13:29:35.409296424Z" level=info msg="TearDown network for sandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" successfully" Dec 13 13:29:35.415992 containerd[1449]: time="2024-12-13T13:29:35.415932109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.415992 containerd[1449]: time="2024-12-13T13:29:35.415983025Z" level=info msg="RemovePodSandbox \"b528438cb8163ad2aef30dd5d012230125b22bacbc1996f26597aacff3a1201a\" returns successfully" Dec 13 13:29:35.416491 containerd[1449]: time="2024-12-13T13:29:35.416471262Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:35.416738 containerd[1449]: time="2024-12-13T13:29:35.416662931Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:29:35.416738 containerd[1449]: time="2024-12-13T13:29:35.416680504Z" level=info msg="StopPodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:29:35.417071 containerd[1449]: time="2024-12-13T13:29:35.417042965Z" level=info msg="RemovePodSandbox for \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:35.417131 containerd[1449]: time="2024-12-13T13:29:35.417073181Z" level=info msg="Forcibly stopping sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\"" Dec 13 13:29:35.417180 containerd[1449]: time="2024-12-13T13:29:35.417144325Z" level=info msg="TearDown network for sandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" successfully" Dec 13 13:29:35.423943 containerd[1449]: time="2024-12-13T13:29:35.423864969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.424216 containerd[1449]: time="2024-12-13T13:29:35.423949318Z" level=info msg="RemovePodSandbox \"80428a68e929b7a5fe0fe9197451793805d1ccc6b30657b54c5712a597bee652\" returns successfully" Dec 13 13:29:35.424578 containerd[1449]: time="2024-12-13T13:29:35.424487839Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:35.424809 containerd[1449]: time="2024-12-13T13:29:35.424723882Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:29:35.424809 containerd[1449]: time="2024-12-13T13:29:35.424746054Z" level=info msg="StopPodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:29:35.425906 containerd[1449]: time="2024-12-13T13:29:35.425076784Z" level=info msg="RemovePodSandbox for \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:35.425906 containerd[1449]: time="2024-12-13T13:29:35.425100398Z" level=info msg="Forcibly stopping sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\"" Dec 13 13:29:35.425906 containerd[1449]: time="2024-12-13T13:29:35.425159149Z" level=info msg="TearDown network for sandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" successfully" Dec 13 13:29:35.428006 containerd[1449]: time="2024-12-13T13:29:35.427979593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.428121 containerd[1449]: time="2024-12-13T13:29:35.428102464Z" level=info msg="RemovePodSandbox \"600f5c9de67c290a123d858023bf36d579c6cdd3fa0b85cf2a9e47c426e3979f\" returns successfully" Dec 13 13:29:35.428511 containerd[1449]: time="2024-12-13T13:29:35.428490311Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:35.428786 containerd[1449]: time="2024-12-13T13:29:35.428767502Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:29:35.428869 containerd[1449]: time="2024-12-13T13:29:35.428853774Z" level=info msg="StopPodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:29:35.430113 containerd[1449]: time="2024-12-13T13:29:35.429288230Z" level=info msg="RemovePodSandbox for \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:35.430113 containerd[1449]: time="2024-12-13T13:29:35.429314018Z" level=info msg="Forcibly stopping sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\"" Dec 13 13:29:35.430113 containerd[1449]: time="2024-12-13T13:29:35.429370484Z" level=info msg="TearDown network for sandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" successfully" Dec 13 13:29:35.432358 containerd[1449]: time="2024-12-13T13:29:35.432334026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.432471 containerd[1449]: time="2024-12-13T13:29:35.432451917Z" level=info msg="RemovePodSandbox \"7569277e045706f329ab489063636603c9060725a50ae1285872cb139bd6a320\" returns successfully" Dec 13 13:29:35.432946 containerd[1449]: time="2024-12-13T13:29:35.432907232Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:35.433155 containerd[1449]: time="2024-12-13T13:29:35.433133036Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:29:35.433244 containerd[1449]: time="2024-12-13T13:29:35.433228506Z" level=info msg="StopPodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:29:35.433672 containerd[1449]: time="2024-12-13T13:29:35.433644466Z" level=info msg="RemovePodSandbox for \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:35.433738 containerd[1449]: time="2024-12-13T13:29:35.433675043Z" level=info msg="Forcibly stopping sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\"" Dec 13 13:29:35.433875 containerd[1449]: time="2024-12-13T13:29:35.433750495Z" level=info msg="TearDown network for sandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" successfully" Dec 13 13:29:35.437263 containerd[1449]: time="2024-12-13T13:29:35.437207104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.437263 containerd[1449]: time="2024-12-13T13:29:35.437255945Z" level=info msg="RemovePodSandbox \"6bf4d027f35ad0d592900f73b004bb1954d4e8416164842fcfcb8a315c21cfe5\" returns successfully" Dec 13 13:29:35.437964 containerd[1449]: time="2024-12-13T13:29:35.437736717Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:35.437964 containerd[1449]: time="2024-12-13T13:29:35.437846824Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:29:35.437964 containerd[1449]: time="2024-12-13T13:29:35.437860881Z" level=info msg="StopPodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:29:35.438403 containerd[1449]: time="2024-12-13T13:29:35.438238599Z" level=info msg="RemovePodSandbox for \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:35.438403 containerd[1449]: time="2024-12-13T13:29:35.438265840Z" level=info msg="Forcibly stopping sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\"" Dec 13 13:29:35.439250 containerd[1449]: time="2024-12-13T13:29:35.438373723Z" level=info msg="TearDown network for sandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" successfully" Dec 13 13:29:35.442783 containerd[1449]: time="2024-12-13T13:29:35.442755007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.443021 containerd[1449]: time="2024-12-13T13:29:35.442915788Z" level=info msg="RemovePodSandbox \"3151e7323d43d89e68150771ae3be48f603cfd61d37faaa81b3ba32a6f392db6\" returns successfully" Dec 13 13:29:35.443730 containerd[1449]: time="2024-12-13T13:29:35.443565027Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:35.443730 containerd[1449]: time="2024-12-13T13:29:35.443659444Z" level=info msg="TearDown network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" successfully" Dec 13 13:29:35.443730 containerd[1449]: time="2024-12-13T13:29:35.443671216Z" level=info msg="StopPodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" returns successfully" Dec 13 13:29:35.443996 containerd[1449]: time="2024-12-13T13:29:35.443958957Z" level=info msg="RemovePodSandbox for \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:35.444088 containerd[1449]: time="2024-12-13T13:29:35.443991578Z" level=info msg="Forcibly stopping sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\"" Dec 13 13:29:35.444166 containerd[1449]: time="2024-12-13T13:29:35.444067340Z" level=info msg="TearDown network for sandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" successfully" Dec 13 13:29:35.446916 containerd[1449]: time="2024-12-13T13:29:35.446860743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.447230 containerd[1449]: time="2024-12-13T13:29:35.446922889Z" level=info msg="RemovePodSandbox \"6668aa2d04ff7815328ab920bbcbee8449f6409a72a6ff9cdabfbcff8ecd813b\" returns successfully" Dec 13 13:29:35.447661 containerd[1449]: time="2024-12-13T13:29:35.447352717Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" Dec 13 13:29:35.447661 containerd[1449]: time="2024-12-13T13:29:35.447433759Z" level=info msg="TearDown network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" successfully" Dec 13 13:29:35.447661 containerd[1449]: time="2024-12-13T13:29:35.447446613Z" level=info msg="StopPodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" returns successfully" Dec 13 13:29:35.450932 containerd[1449]: time="2024-12-13T13:29:35.450241298Z" level=info msg="RemovePodSandbox for \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" Dec 13 13:29:35.450932 containerd[1449]: time="2024-12-13T13:29:35.450274921Z" level=info msg="Forcibly stopping sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\"" Dec 13 13:29:35.450932 containerd[1449]: time="2024-12-13T13:29:35.450375159Z" level=info msg="TearDown network for sandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" successfully" Dec 13 13:29:35.453810 containerd[1449]: time="2024-12-13T13:29:35.453770213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.453962 containerd[1449]: time="2024-12-13T13:29:35.453822260Z" level=info msg="RemovePodSandbox \"99b9d16fb8fe14762daea9ce0f6475277613de5362a06a134624b846dd5a6c6b\" returns successfully" Dec 13 13:29:35.454310 containerd[1449]: time="2024-12-13T13:29:35.454285339Z" level=info msg="StopPodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\"" Dec 13 13:29:35.454578 containerd[1449]: time="2024-12-13T13:29:35.454480796Z" level=info msg="TearDown network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" successfully" Dec 13 13:29:35.454578 containerd[1449]: time="2024-12-13T13:29:35.454502828Z" level=info msg="StopPodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" returns successfully" Dec 13 13:29:35.454966 containerd[1449]: time="2024-12-13T13:29:35.454923507Z" level=info msg="RemovePodSandbox for \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\"" Dec 13 13:29:35.455025 containerd[1449]: time="2024-12-13T13:29:35.454971999Z" level=info msg="Forcibly stopping sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\"" Dec 13 13:29:35.455097 containerd[1449]: time="2024-12-13T13:29:35.455046448Z" level=info msg="TearDown network for sandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" successfully" Dec 13 13:29:35.458548 containerd[1449]: time="2024-12-13T13:29:35.458511532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.458640 containerd[1449]: time="2024-12-13T13:29:35.458558670Z" level=info msg="RemovePodSandbox \"10120d9980b653e4451fb292195c4c1014167a51da97b8f34ddfc37e9a44261f\" returns successfully" Dec 13 13:29:35.458908 containerd[1449]: time="2024-12-13T13:29:35.458869153Z" level=info msg="StopPodSandbox for \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\"" Dec 13 13:29:35.459232 containerd[1449]: time="2024-12-13T13:29:35.459141524Z" level=info msg="TearDown network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\" successfully" Dec 13 13:29:35.459232 containerd[1449]: time="2024-12-13T13:29:35.459164037Z" level=info msg="StopPodSandbox for \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\" returns successfully" Dec 13 13:29:35.459995 containerd[1449]: time="2024-12-13T13:29:35.459438822Z" level=info msg="RemovePodSandbox for \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\"" Dec 13 13:29:35.459995 containerd[1449]: time="2024-12-13T13:29:35.459464600Z" level=info msg="Forcibly stopping sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\"" Dec 13 13:29:35.459995 containerd[1449]: time="2024-12-13T13:29:35.459541445Z" level=info msg="TearDown network for sandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\" successfully" Dec 13 13:29:35.463777 containerd[1449]: time="2024-12-13T13:29:35.463727482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.463843 containerd[1449]: time="2024-12-13T13:29:35.463776725Z" level=info msg="RemovePodSandbox \"3a30e1f6d6c1c84373fcffbd8a750b69cb596a27bb7acad6ccece778c3a0773b\" returns successfully" Dec 13 13:29:35.464728 containerd[1449]: time="2024-12-13T13:29:35.464439749Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:35.464728 containerd[1449]: time="2024-12-13T13:29:35.464521653Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:29:35.464728 containerd[1449]: time="2024-12-13T13:29:35.464533816Z" level=info msg="StopPodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:29:35.465187 containerd[1449]: time="2024-12-13T13:29:35.465111159Z" level=info msg="RemovePodSandbox for \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:35.465187 containerd[1449]: time="2024-12-13T13:29:35.465150443Z" level=info msg="Forcibly stopping sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\"" Dec 13 13:29:35.465330 containerd[1449]: time="2024-12-13T13:29:35.465225564Z" level=info msg="TearDown network for sandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" successfully" Dec 13 13:29:35.469744 containerd[1449]: time="2024-12-13T13:29:35.469704401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.469844 containerd[1449]: time="2024-12-13T13:29:35.469752972Z" level=info msg="RemovePodSandbox \"d7631a251585364a892d934d5b4172df7d392390f1512d95497ea59cdaa439f6\" returns successfully" Dec 13 13:29:35.470397 containerd[1449]: time="2024-12-13T13:29:35.470141953Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:35.470397 containerd[1449]: time="2024-12-13T13:29:35.470220179Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:29:35.470397 containerd[1449]: time="2024-12-13T13:29:35.470231791Z" level=info msg="StopPodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:29:35.470859 containerd[1449]: time="2024-12-13T13:29:35.470637302Z" level=info msg="RemovePodSandbox for \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:35.470859 containerd[1449]: time="2024-12-13T13:29:35.470660205Z" level=info msg="Forcibly stopping sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\"" Dec 13 13:29:35.470859 containerd[1449]: time="2024-12-13T13:29:35.470750915Z" level=info msg="TearDown network for sandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" successfully" Dec 13 13:29:35.474918 containerd[1449]: time="2024-12-13T13:29:35.474640896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.474918 containerd[1449]: time="2024-12-13T13:29:35.474684418Z" level=info msg="RemovePodSandbox \"e386a4390adf5c3f69b06794cc703409bc0f40a0334acdf9ce30a67e76443e2a\" returns successfully" Dec 13 13:29:35.475176 containerd[1449]: time="2024-12-13T13:29:35.475151064Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:35.475347 containerd[1449]: time="2024-12-13T13:29:35.475324460Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:29:35.475433 containerd[1449]: time="2024-12-13T13:29:35.475413267Z" level=info msg="StopPodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:29:35.476217 containerd[1449]: time="2024-12-13T13:29:35.476192439Z" level=info msg="RemovePodSandbox for \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:35.476330 containerd[1449]: time="2024-12-13T13:29:35.476309679Z" level=info msg="Forcibly stopping sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\"" Dec 13 13:29:35.476529 containerd[1449]: time="2024-12-13T13:29:35.476482794Z" level=info msg="TearDown network for sandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" successfully" Dec 13 13:29:35.482380 containerd[1449]: time="2024-12-13T13:29:35.482322816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.503571 containerd[1449]: time="2024-12-13T13:29:35.503510676Z" level=info msg="RemovePodSandbox \"9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6\" returns successfully" Dec 13 13:29:35.504221 containerd[1449]: time="2024-12-13T13:29:35.504157259Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:35.504511 containerd[1449]: time="2024-12-13T13:29:35.504268237Z" level=info msg="TearDown network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:29:35.504511 containerd[1449]: time="2024-12-13T13:29:35.504285460Z" level=info msg="StopPodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:29:35.505862 containerd[1449]: time="2024-12-13T13:29:35.504701601Z" level=info msg="RemovePodSandbox for \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:35.505862 containerd[1449]: time="2024-12-13T13:29:35.504730165Z" level=info msg="Forcibly stopping sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\"" Dec 13 13:29:35.505862 containerd[1449]: time="2024-12-13T13:29:35.504804324Z" level=info msg="TearDown network for sandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" successfully" Dec 13 13:29:35.508992 containerd[1449]: time="2024-12-13T13:29:35.508961737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.509219 containerd[1449]: time="2024-12-13T13:29:35.509163406Z" level=info msg="RemovePodSandbox \"bffb35d558fabfad7d5fc42f34272403a108038fd0e74c603ae553a22acacbef\" returns successfully" Dec 13 13:29:35.510047 containerd[1449]: time="2024-12-13T13:29:35.510024923Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:35.510314 containerd[1449]: time="2024-12-13T13:29:35.510296943Z" level=info msg="TearDown network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" successfully" Dec 13 13:29:35.510386 containerd[1449]: time="2024-12-13T13:29:35.510371624Z" level=info msg="StopPodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" returns successfully" Dec 13 13:29:35.510721 containerd[1449]: time="2024-12-13T13:29:35.510700311Z" level=info msg="RemovePodSandbox for \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:35.511058 containerd[1449]: time="2024-12-13T13:29:35.510866412Z" level=info msg="Forcibly stopping sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\"" Dec 13 13:29:35.511058 containerd[1449]: time="2024-12-13T13:29:35.510991999Z" level=info msg="TearDown network for sandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" successfully" Dec 13 13:29:35.514392 containerd[1449]: time="2024-12-13T13:29:35.514260824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:35.514392 containerd[1449]: time="2024-12-13T13:29:35.514299166Z" level=info msg="RemovePodSandbox \"9ff155ebc05695af7a55ca479fc18572324264e1fd3461519ef5c759d227ddb0\" returns successfully" Dec 13 13:29:35.515056 containerd[1449]: time="2024-12-13T13:29:35.514793694Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" Dec 13 13:29:35.515056 containerd[1449]: time="2024-12-13T13:29:35.514975867Z" level=info msg="TearDown network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" successfully" Dec 13 13:29:35.515056 containerd[1449]: time="2024-12-13T13:29:35.514996916Z" level=info msg="StopPodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" returns successfully" Dec 13 13:29:35.515974 containerd[1449]: time="2024-12-13T13:29:35.515366469Z" level=info msg="RemovePodSandbox for \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" Dec 13 13:29:35.515974 containerd[1449]: time="2024-12-13T13:29:35.515422755Z" level=info msg="Forcibly stopping sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\"" Dec 13 13:29:35.515974 containerd[1449]: time="2024-12-13T13:29:35.515514016Z" level=info msg="TearDown network for sandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" successfully" Dec 13 13:29:35.519481 containerd[1449]: time="2024-12-13T13:29:35.519434314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.519557 containerd[1449]: time="2024-12-13T13:29:35.519509035Z" level=info msg="RemovePodSandbox \"9d0aae2d2a669130f6b3d818adce49287f982e022e4481e543774a8e5a7e4f0e\" returns successfully" Dec 13 13:29:35.520056 containerd[1449]: time="2024-12-13T13:29:35.519821001Z" level=info msg="StopPodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\"" Dec 13 13:29:35.520056 containerd[1449]: time="2024-12-13T13:29:35.519929985Z" level=info msg="TearDown network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" successfully" Dec 13 13:29:35.520056 containerd[1449]: time="2024-12-13T13:29:35.519947267Z" level=info msg="StopPodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" returns successfully" Dec 13 13:29:35.520528 containerd[1449]: time="2024-12-13T13:29:35.520483003Z" level=info msg="RemovePodSandbox for \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\"" Dec 13 13:29:35.520528 containerd[1449]: time="2024-12-13T13:29:35.520513881Z" level=info msg="Forcibly stopping sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\"" Dec 13 13:29:35.520619 containerd[1449]: time="2024-12-13T13:29:35.520592118Z" level=info msg="TearDown network for sandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" successfully" Dec 13 13:29:35.547626 containerd[1449]: time="2024-12-13T13:29:35.547512037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
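The long run of StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries above is containerd tearing down and removing pod sandboxes that had already stopped earlier; the "not found" warning only means the sandbox metadata is already gone by the time containerd tries to attach a status to the removal event, and each removal still returns successfully. A minimal sketch of how the same cleanup could be reproduced by hand, assuming crictl is installed on the node and pointed at this containerd socket (the sandbox ID below is copied from the log above):

    # List pod sandboxes that are no longer running
    crictl pods --state NotReady
    # Stop and remove one of them by ID -- a manual equivalent of the
    # RemovePodSandbox calls seen in the log
    crictl stopp 9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6
    crictl rmp 9ea676dc99018201564ed28aae61581b63ab75a5e21e331992630787bf9116f6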
Dec 13 13:29:35.547626 containerd[1449]: time="2024-12-13T13:29:35.547614058Z" level=info msg="RemovePodSandbox \"a54aca91cadd65341fbf2200f2f3dd1ba0985ce903a04094f876773253f96c86\" returns successfully" Dec 13 13:29:35.548934 containerd[1449]: time="2024-12-13T13:29:35.548460958Z" level=info msg="StopPodSandbox for \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\"" Dec 13 13:29:35.548934 containerd[1449]: time="2024-12-13T13:29:35.548643251Z" level=info msg="TearDown network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\" successfully" Dec 13 13:29:35.548934 containerd[1449]: time="2024-12-13T13:29:35.548678617Z" level=info msg="StopPodSandbox for \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\" returns successfully" Dec 13 13:29:35.549440 containerd[1449]: time="2024-12-13T13:29:35.549367981Z" level=info msg="RemovePodSandbox for \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\"" Dec 13 13:29:35.549440 containerd[1449]: time="2024-12-13T13:29:35.549423315Z" level=info msg="Forcibly stopping sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\"" Dec 13 13:29:35.549656 containerd[1449]: time="2024-12-13T13:29:35.549561764Z" level=info msg="TearDown network for sandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\" successfully" Dec 13 13:29:35.553912 containerd[1449]: time="2024-12-13T13:29:35.553781755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:29:35.553912 containerd[1449]: time="2024-12-13T13:29:35.553864310Z" level=info msg="RemovePodSandbox \"c529d7492b650cde74b9c28a9ec55e4b1be32175dd6206b053ef2cbcd9005bfc\" returns successfully" Dec 13 13:29:36.257626 kubelet[1847]: E1213 13:29:36.257558 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:37.257938 kubelet[1847]: E1213 13:29:37.257818 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:37.900620 kubelet[1847]: I1213 13:29:37.900047 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=4.996154278 podStartE2EDuration="11.900025609s" podCreationTimestamp="2024-12-13 13:29:26 +0000 UTC" firstStartedPulling="2024-12-13 13:29:27.013656383 +0000 UTC m=+52.465756702" lastFinishedPulling="2024-12-13 13:29:33.917527714 +0000 UTC m=+59.369628033" observedRunningTime="2024-12-13 13:29:34.417281328 +0000 UTC m=+59.869381727" watchObservedRunningTime="2024-12-13 13:29:37.900025609 +0000 UTC m=+63.352125999" Dec 13 13:29:38.258564 kubelet[1847]: E1213 13:29:38.258348 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:39.259518 kubelet[1847]: E1213 13:29:39.259390 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:40.260639 kubelet[1847]: E1213 13:29:40.260555 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:41.260853 kubelet[1847]: E1213 13:29:41.260768 1847 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:42.261219 kubelet[1847]: E1213 13:29:42.261090 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:43.262392 kubelet[1847]: E1213 13:29:43.262309 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:44.263428 kubelet[1847]: E1213 13:29:44.263332 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:45.264195 kubelet[1847]: E1213 13:29:45.264125 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:46.264389 kubelet[1847]: E1213 13:29:46.264303 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:47.265122 kubelet[1847]: E1213 13:29:47.265046 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:48.266454 kubelet[1847]: E1213 13:29:48.266311 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:49.267415 kubelet[1847]: E1213 13:29:49.267339 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:50.267640 kubelet[1847]: E1213 13:29:50.267553 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:51.268647 kubelet[1847]: E1213 13:29:51.268556 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:52.269535 kubelet[1847]: E1213 13:29:52.269424 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:53.269646 kubelet[1847]: E1213 13:29:53.269580 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:54.270200 kubelet[1847]: E1213 13:29:54.270035 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:55.197134 kubelet[1847]: E1213 13:29:55.197038 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:55.270323 kubelet[1847]: E1213 13:29:55.270248 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:56.271099 kubelet[1847]: E1213 13:29:56.270997 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:57.271848 kubelet[1847]: E1213 13:29:57.271716 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:58.272157 kubelet[1847]: E1213 13:29:58.272056 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:58.956230 systemd[1]: Created slice kubepods-besteffort-poddd3096c7_060d_46fc_acdb_988da358da6e.slice - libcontainer container kubepods-besteffort-poddd3096c7_060d_46fc_acdb_988da358da6e.slice. 
Dec 13 13:29:59.094465 kubelet[1847]: I1213 13:29:59.094339 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01e5bc18-f821-46e4-b23e-0dba5f5079b6\" (UniqueName: \"kubernetes.io/nfs/dd3096c7-060d-46fc-acdb-988da358da6e-pvc-01e5bc18-f821-46e4-b23e-0dba5f5079b6\") pod \"test-pod-1\" (UID: \"dd3096c7-060d-46fc-acdb-988da358da6e\") " pod="default/test-pod-1" Dec 13 13:29:59.094465 kubelet[1847]: I1213 13:29:59.094452 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfbk\" (UniqueName: \"kubernetes.io/projected/dd3096c7-060d-46fc-acdb-988da358da6e-kube-api-access-hjfbk\") pod \"test-pod-1\" (UID: \"dd3096c7-060d-46fc-acdb-988da358da6e\") " pod="default/test-pod-1" Dec 13 13:29:59.273123 kubelet[1847]: E1213 13:29:59.272772 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:59.287210 kernel: FS-Cache: Loaded Dec 13 13:29:59.378154 kernel: RPC: Registered named UNIX socket transport module. Dec 13 13:29:59.378389 kernel: RPC: Registered udp transport module. Dec 13 13:29:59.378445 kernel: RPC: Registered tcp transport module. Dec 13 13:29:59.378489 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:29:59.378534 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 13:29:59.748173 kernel: NFS: Registering the id_resolver key type Dec 13 13:29:59.749045 kernel: Key type id_resolver registered Dec 13 13:29:59.751661 kernel: Key type id_legacy registered Dec 13 13:29:59.871164 nfsidmap[3821]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 13:29:59.886319 nfsidmap[3822]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 13:30:00.164325 containerd[1449]: time="2024-12-13T13:30:00.164032571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dd3096c7-060d-46fc-acdb-988da358da6e,Namespace:default,Attempt:0,}" Dec 13 13:30:00.273065 kubelet[1847]: E1213 13:30:00.272999 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:00.452133 systemd-networkd[1362]: cali5ec59c6bf6e: Link UP Dec 13 13:30:00.452624 systemd-networkd[1362]: cali5ec59c6bf6e: Gained carrier Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.302 [INFO][3824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.94-k8s-test--pod--1-eth0 default dd3096c7-060d-46fc-acdb-988da358da6e 1397 0 2024-12-13 13:29:30 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.94 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.302 [INFO][3824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.378 [INFO][3834] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" HandleID="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Workload="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.395 [INFO][3834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" HandleID="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Workload="172.24.4.94-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011dce0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.94", "pod":"test-pod-1", "timestamp":"2024-12-13 13:30:00.378773722 +0000 UTC"}, Hostname:"172.24.4.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.395 [INFO][3834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.395 [INFO][3834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.395 [INFO][3834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.94' Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.399 [INFO][3834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.407 [INFO][3834] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.415 [INFO][3834] ipam/ipam.go 489: Trying affinity for 192.168.110.192/26 host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.419 [INFO][3834] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.422 [INFO][3834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.423 [INFO][3834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.426 [INFO][3834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5 Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.432 [INFO][3834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.443 [INFO][3834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.196/26] block=192.168.110.192/26 handle="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.443 [INFO][3834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.110.196/26] handle="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" host="172.24.4.94" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.443 [INFO][3834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.443 [INFO][3834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.196/26] IPv6=[] ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" HandleID="k8s-pod-network.0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Workload="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.478460 containerd[1449]: 2024-12-13 13:30:00.447 [INFO][3824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dd3096c7-060d-46fc-acdb-988da358da6e", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.110.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:30:00.479190 containerd[1449]: 2024-12-13 13:30:00.447 [INFO][3824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.196/32] ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.479190 containerd[1449]: 2024-12-13 13:30:00.447 [INFO][3824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.479190 containerd[1449]: 2024-12-13 13:30:00.452 [INFO][3824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.479190 containerd[1449]: 2024-12-13 13:30:00.453 [INFO][3824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172.24.4.94-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dd3096c7-060d-46fc-acdb-988da358da6e", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.94", ContainerID:"0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.110.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e6:b2:3a:65:60:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:30:00.479190 containerd[1449]: 2024-12-13 13:30:00.472 [INFO][3824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.94-k8s-test--pod--1-eth0" Dec 13 13:30:00.523292 containerd[1449]: time="2024-12-13T13:30:00.522751035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:00.523292 containerd[1449]: time="2024-12-13T13:30:00.522851403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:00.523292 containerd[1449]: time="2024-12-13T13:30:00.522958675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:00.523292 containerd[1449]: time="2024-12-13T13:30:00.523091163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:00.559043 systemd[1]: run-containerd-runc-k8s.io-0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5-runc.XyZqjt.mount: Deactivated successfully. Dec 13 13:30:00.571297 systemd[1]: Started cri-containerd-0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5.scope - libcontainer container 0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5. 
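At this point Calico's IPAM has confirmed the node's affinity for the block 192.168.110.192/26, assigned 192.168.110.196/26 to test-pod-1, and recorded the host-side veth cali5ec59c6bf6e (MAC e6:b2:3a:65:60:6b) in the workload endpoint. A minimal verification sketch, assuming kubectl and calicoctl are available against this cluster and node:

    # The pod should report the IP Calico just assigned
    kubectl get pod test-pod-1 -o wide
    # Which IPAM handle/block owns that address
    calicoctl ipam show --ip=192.168.110.196
    # Host-side veth created by the CNI plugin for this endpoint
    ip link show cali5ec59c6bf6e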
Dec 13 13:30:00.618553 containerd[1449]: time="2024-12-13T13:30:00.618487778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dd3096c7-060d-46fc-acdb-988da358da6e,Namespace:default,Attempt:0,} returns sandbox id \"0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5\"" Dec 13 13:30:00.621026 containerd[1449]: time="2024-12-13T13:30:00.620839270Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:30:01.093857 containerd[1449]: time="2024-12-13T13:30:01.093630821Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 13:30:01.103655 containerd[1449]: time="2024-12-13T13:30:01.102544200Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 481.410558ms" Dec 13 13:30:01.103655 containerd[1449]: time="2024-12-13T13:30:01.102632656Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:30:01.106758 containerd[1449]: time="2024-12-13T13:30:01.106408669Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:01.160451 containerd[1449]: time="2024-12-13T13:30:01.160177667Z" level=info msg="CreateContainer within sandbox \"0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 13:30:01.182218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630378992.mount: Deactivated successfully. Dec 13 13:30:01.188237 containerd[1449]: time="2024-12-13T13:30:01.188162761Z" level=info msg="CreateContainer within sandbox \"0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"61abe256374540a34cb514eed838cdaff81024d4b827c80fc9b6c6db0c429822\"" Dec 13 13:30:01.189726 containerd[1449]: time="2024-12-13T13:30:01.189366460Z" level=info msg="StartContainer for \"61abe256374540a34cb514eed838cdaff81024d4b827c80fc9b6c6db0c429822\"" Dec 13 13:30:01.235191 systemd[1]: Started cri-containerd-61abe256374540a34cb514eed838cdaff81024d4b827c80fc9b6c6db0c429822.scope - libcontainer container 61abe256374540a34cb514eed838cdaff81024d4b827c80fc9b6c6db0c429822. 
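containerd then resolves ghcr.io/flatcar/nginx:latest for the test container; only 61 bytes are read during the pull, which suggests the image content was already present locally and explains the roughly 481 ms completion, after which the container is created and started inside the new sandbox. A minimal inspection sketch, assuming crictl is configured against this containerd instance (IDs copied from the log above):

    # Confirm the pulled image and its digest
    crictl images ghcr.io/flatcar/nginx:latest
    # Containers running inside the test-pod-1 sandbox
    crictl ps --pod 0f66108e9c183856d3608c2b74052c530bec8cfdf40bb33a54720495cbbd5ee5
    # Logs of the started test container
    crictl logs 61abe256374540a34cb514eed838cdaff81024d4b827c80fc9b6c6db0c429822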
Dec 13 13:30:01.275831 kubelet[1847]: E1213 13:30:01.273839 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:01.279735 containerd[1449]: time="2024-12-13T13:30:01.279593967Z" level=info msg="StartContainer for \"61abe256374540a34cb514eed838cdaff81024d4b827c80fc9b6c6db0c429822\" returns successfully" Dec 13 13:30:01.527358 kubelet[1847]: I1213 13:30:01.527124 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.04313893 podStartE2EDuration="31.5270875s" podCreationTimestamp="2024-12-13 13:29:30 +0000 UTC" firstStartedPulling="2024-12-13 13:30:00.620549746 +0000 UTC m=+86.072650075" lastFinishedPulling="2024-12-13 13:30:01.104498286 +0000 UTC m=+86.556598645" observedRunningTime="2024-12-13 13:30:01.526515918 +0000 UTC m=+86.978616288" watchObservedRunningTime="2024-12-13 13:30:01.5270875 +0000 UTC m=+86.979187870" Dec 13 13:30:01.962251 systemd-networkd[1362]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 13:30:02.275174 kubelet[1847]: E1213 13:30:02.274763 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:03.275586 kubelet[1847]: E1213 13:30:03.275489 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:04.276375 kubelet[1847]: E1213 13:30:04.276260 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:05.277510 kubelet[1847]: E1213 13:30:05.277364 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:06.277968 kubelet[1847]: E1213 13:30:06.277815 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:07.278200 kubelet[1847]: E1213 13:30:07.278075 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:08.279351 kubelet[1847]: E1213 13:30:08.279227 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:09.280243 kubelet[1847]: E1213 13:30:09.280140 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:10.281353 kubelet[1847]: E1213 13:30:10.281256 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:11.282525 kubelet[1847]: E1213 13:30:11.282429 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:12.283610 kubelet[1847]: E1213 13:30:12.283515 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:13.284261 kubelet[1847]: E1213 13:30:13.284142 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:30:14.284742 kubelet[1847]: E1213 13:30:14.284659 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
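The section closes with test-pod-1 reported as started (podStartE2EDuration of about 31.5 s from pod creation, most of it elapsing before the image pull began), cali5ec59c6bf6e gaining its IPv6 link-local address, and the kubelet manifest-path message continuing as before. One loose end is the earlier pair of nfsidmap warnings ("does not map into domain 'novalocal'"): they indicate an NFSv4 ID-mapping domain mismatch between the NFS server provisioner and this client, which typically surfaces as files owned by nobody/nogroup inside the pod. A minimal sketch of the knob involved, assuming the intent is to make both ends agree on a single ID-mapping domain (the value shown is illustrative, taken from the client's current domain in the warning):

    # /etc/idmapd.conf -- the Domain value must match on the NFS client and
    # the server side; 'novalocal' is what this client currently uses.
    [General]
    Domain = novalocal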