Feb 13 22:29:55.029205 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 22:29:55.029259 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 22:29:55.029274 kernel: BIOS-provided physical RAM map:
Feb 13 22:29:55.029290 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 22:29:55.029301 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 22:29:55.029311 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 22:29:55.029323 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 22:29:55.029334 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 22:29:55.029344 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 22:29:55.029355 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 22:29:55.029365 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 22:29:55.029376 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 22:29:55.029391 kernel: NX (Execute Disable) protection: active
Feb 13 22:29:55.029402 kernel: APIC: Static calls initialized
Feb 13 22:29:55.029415 kernel: SMBIOS 2.8 present.
Feb 13 22:29:55.029426 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 22:29:55.029438 kernel: Hypervisor detected: KVM
Feb 13 22:29:55.029454 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 22:29:55.029466 kernel: kvm-clock: using sched offset of 4447560504 cycles
Feb 13 22:29:55.029478 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 22:29:55.029490 kernel: tsc: Detected 2499.998 MHz processor
Feb 13 22:29:55.029502 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 22:29:55.029514 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 22:29:55.029525 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 22:29:55.029537 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 22:29:55.029559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 22:29:55.029578 kernel: Using GB pages for direct mapping
Feb 13 22:29:55.029590 kernel: ACPI: Early table checksum verification disabled
Feb 13 22:29:55.029602 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 22:29:55.029633 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029645 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029657 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029669 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 22:29:55.029680 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029692 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029710 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029722 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 22:29:55.029734 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 22:29:55.029745 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 22:29:55.029757 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 22:29:55.029775 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 22:29:55.029788 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 22:29:55.029805 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 22:29:55.029817 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 22:29:55.029829 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 22:29:55.029841 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 22:29:55.029853 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 22:29:55.029865 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 22:29:55.029877 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 22:29:55.029894 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 22:29:55.029906 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 22:29:55.029918 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 22:29:55.029930 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 22:29:55.029942 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 22:29:55.029953 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 22:29:55.029965 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 22:29:55.029977 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 22:29:55.029989 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 22:29:55.030001 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 22:29:55.030018 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 22:29:55.030030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 22:29:55.030042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 22:29:55.030054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 22:29:55.030066 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 22:29:55.030079 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 22:29:55.030091 kernel: Zone ranges:
Feb 13 22:29:55.030103 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 22:29:55.030115 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 22:29:55.030132 kernel: Normal empty
Feb 13 22:29:55.030144 kernel: Movable zone start for each node
Feb 13 22:29:55.030156 kernel: Early memory node ranges
Feb 13 22:29:55.030168 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 22:29:55.030180 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 22:29:55.030192 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 22:29:55.030204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 22:29:55.030216 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 22:29:55.030228 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 22:29:55.030240 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 22:29:55.030257 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 22:29:55.030270 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 22:29:55.030282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 22:29:55.030294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 22:29:55.030306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 22:29:55.030318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 22:29:55.030330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 22:29:55.030342 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 22:29:55.030354 kernel: TSC deadline timer available
Feb 13 22:29:55.030372 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 22:29:55.030384 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 22:29:55.030396 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 22:29:55.030408 kernel: Booting paravirtualized kernel on KVM
Feb 13 22:29:55.030420 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 22:29:55.030432 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 22:29:55.030445 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 22:29:55.030457 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 22:29:55.030468 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 22:29:55.030485 kernel: kvm-guest: PV spinlocks enabled
Feb 13 22:29:55.030498 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 22:29:55.030511 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 22:29:55.030524 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 22:29:55.030536 kernel: random: crng init done
Feb 13 22:29:55.030557 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 22:29:55.030571 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 22:29:55.030584 kernel: Fallback order for Node 0: 0
Feb 13 22:29:55.030602 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 22:29:55.034381 kernel: Policy zone: DMA32
Feb 13 22:29:55.034396 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 22:29:55.034409 kernel: software IO TLB: area num 16.
Feb 13 22:29:55.034422 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 194828K reserved, 0K cma-reserved)
Feb 13 22:29:55.034435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 22:29:55.034447 kernel: Kernel/User page tables isolation: enabled
Feb 13 22:29:55.034459 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 22:29:55.034471 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 22:29:55.034492 kernel: Dynamic Preempt: voluntary
Feb 13 22:29:55.034505 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 22:29:55.034518 kernel: rcu: RCU event tracing is enabled.
Feb 13 22:29:55.034530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 22:29:55.034543 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 22:29:55.034580 kernel: Rude variant of Tasks RCU enabled.
Feb 13 22:29:55.034598 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 22:29:55.034628 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 22:29:55.034642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 22:29:55.034655 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 22:29:55.034668 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 22:29:55.034687 kernel: Console: colour VGA+ 80x25
Feb 13 22:29:55.034700 kernel: printk: console [tty0] enabled
Feb 13 22:29:55.034713 kernel: printk: console [ttyS0] enabled
Feb 13 22:29:55.034726 kernel: ACPI: Core revision 20230628
Feb 13 22:29:55.034739 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 22:29:55.034752 kernel: x2apic enabled
Feb 13 22:29:55.034770 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 22:29:55.034783 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 22:29:55.034796 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 13 22:29:55.034809 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 22:29:55.034822 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 22:29:55.034834 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 22:29:55.034847 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 22:29:55.034859 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 22:29:55.034872 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 22:29:55.034890 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 22:29:55.034903 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 22:29:55.034915 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 22:29:55.034928 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 22:29:55.034941 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 22:29:55.034953 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 13 22:29:55.034965 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 22:29:55.034978 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 22:29:55.034991 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 22:29:55.035004 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 22:29:55.035016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 22:29:55.035034 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 22:29:55.035047 kernel: Freeing SMP alternatives memory: 32K
Feb 13 22:29:55.035060 kernel: pid_max: default: 32768 minimum: 301
Feb 13 22:29:55.035072 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 22:29:55.035085 kernel: landlock: Up and running.
Feb 13 22:29:55.035097 kernel: SELinux: Initializing.
Feb 13 22:29:55.035110 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 22:29:55.035123 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 22:29:55.035136 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 13 22:29:55.035149 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 22:29:55.035162 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 22:29:55.035180 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 22:29:55.035193 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 13 22:29:55.035206 kernel: signal: max sigframe size: 1776
Feb 13 22:29:55.035219 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 22:29:55.035232 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 22:29:55.035245 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 22:29:55.035257 kernel: smp: Bringing up secondary CPUs ...
Feb 13 22:29:55.035270 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 22:29:55.035282 kernel: .... node #0, CPUs: #1
Feb 13 22:29:55.035301 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 22:29:55.035313 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 22:29:55.035326 kernel: smpboot: Max logical packages: 16
Feb 13 22:29:55.035339 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 13 22:29:55.035352 kernel: devtmpfs: initialized
Feb 13 22:29:55.035364 kernel: x86/mm: Memory block size: 128MB
Feb 13 22:29:55.035377 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 22:29:55.035390 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 22:29:55.035403 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 22:29:55.035420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 22:29:55.035445 kernel: audit: initializing netlink subsys (disabled)
Feb 13 22:29:55.035459 kernel: audit: type=2000 audit(1739485793.896:1): state=initialized audit_enabled=0 res=1
Feb 13 22:29:55.035472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 22:29:55.035485 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 22:29:55.035497 kernel: cpuidle: using governor menu
Feb 13 22:29:55.035510 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 22:29:55.035523 kernel: dca service started, version 1.12.1
Feb 13 22:29:55.035536 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 22:29:55.035597 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 22:29:55.035643 kernel: PCI: Using configuration type 1 for base access
Feb 13 22:29:55.035657 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 22:29:55.035670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 22:29:55.035683 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 22:29:55.035696 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 22:29:55.035708 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 22:29:55.035721 kernel: ACPI: Added _OSI(Module Device)
Feb 13 22:29:55.035734 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 22:29:55.035756 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 22:29:55.035769 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 22:29:55.035781 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 22:29:55.035794 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 22:29:55.035807 kernel: ACPI: Interpreter enabled
Feb 13 22:29:55.035820 kernel: ACPI: PM: (supports S0 S5)
Feb 13 22:29:55.035832 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 22:29:55.035845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 22:29:55.035858 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 22:29:55.035876 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 22:29:55.035889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 22:29:55.036179 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 22:29:55.036362 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 22:29:55.036527 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 22:29:55.036556 kernel: PCI host bridge to bus 0000:00
Feb 13 22:29:55.037780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 22:29:55.037948 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 22:29:55.038099 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 22:29:55.038250 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 22:29:55.038400 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 22:29:55.038559 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 22:29:55.038733 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 22:29:55.038933 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 22:29:55.039134 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 22:29:55.039303 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 22:29:55.039468 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 22:29:55.041454 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 22:29:55.041694 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 22:29:55.041874 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.042048 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 22:29:55.042225 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.042389 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 22:29:55.042573 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.044322 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 22:29:55.044514 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.045295 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 22:29:55.045484 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.046732 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 22:29:55.046968 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.047135 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 22:29:55.047314 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.047492 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 22:29:55.048720 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 22:29:55.048934 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 22:29:55.049241 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 22:29:55.049435 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 22:29:55.050730 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 22:29:55.050906 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 22:29:55.051080 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 22:29:55.051255 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 22:29:55.051436 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 22:29:55.052747 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 22:29:55.052923 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 22:29:55.053097 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 22:29:55.053302 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 22:29:55.053497 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 22:29:55.054756 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 22:29:55.054929 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 22:29:55.055106 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 22:29:55.055270 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 22:29:55.055449 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 22:29:55.056545 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 22:29:55.056794 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 22:29:55.056961 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 22:29:55.057123 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 22:29:55.057300 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 22:29:55.057496 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 22:29:55.057730 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 22:29:55.057904 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 22:29:55.058094 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 22:29:55.058287 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 22:29:55.058470 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 22:29:55.058711 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 22:29:55.058876 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 22:29:55.059046 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 22:29:55.059224 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 22:29:55.059391 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 22:29:55.059565 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 22:29:55.059763 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 22:29:55.059943 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 22:29:55.060122 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 22:29:55.060290 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 22:29:55.060473 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 22:29:55.060700 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 22:29:55.060864 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 22:29:55.061023 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 22:29:55.061189 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 22:29:55.061349 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 22:29:55.061508 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 22:29:55.061703 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 22:29:55.061888 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 22:29:55.062054 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 22:29:55.062233 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 22:29:55.062397 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 22:29:55.062583 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 22:29:55.062632 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 22:29:55.062652 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 22:29:55.062665 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 22:29:55.062686 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 22:29:55.062700 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 22:29:55.062713 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 22:29:55.062726 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 22:29:55.062739 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 22:29:55.062752 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 22:29:55.062765 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 22:29:55.062777 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 22:29:55.062790 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 22:29:55.062808 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 22:29:55.062822 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 22:29:55.062835 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 22:29:55.062847 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 22:29:55.062860 kernel: iommu: Default domain type: Translated
Feb 13 22:29:55.062873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 22:29:55.062886 kernel: PCI: Using ACPI for IRQ routing
Feb 13 22:29:55.062899 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 22:29:55.062912 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 22:29:55.062930 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 22:29:55.063108 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 22:29:55.063286 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 22:29:55.063451 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 22:29:55.063471 kernel: vgaarb: loaded
Feb 13 22:29:55.063484 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 22:29:55.063497 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 22:29:55.063511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 22:29:55.063530 kernel: pnp: PnP ACPI init
Feb 13 22:29:55.063780 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 22:29:55.063803 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 22:29:55.063817 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 22:29:55.063830 kernel: NET: Registered PF_INET protocol family
Feb 13 22:29:55.063843 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 22:29:55.063856 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 22:29:55.063869 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 22:29:55.063883 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 22:29:55.063904 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 22:29:55.063917 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 22:29:55.063930 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 22:29:55.063943 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 22:29:55.063956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 22:29:55.063969 kernel: NET: Registered PF_XDP protocol family
Feb 13 22:29:55.064128 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 22:29:55.064290 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 22:29:55.064459 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 22:29:55.064666 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 22:29:55.064830 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 22:29:55.064990 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 22:29:55.065151 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 22:29:55.065311 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 22:29:55.065484 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 22:29:55.065724 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 22:29:55.065886 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 22:29:55.066045 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 22:29:55.066206 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 22:29:55.066365 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 22:29:55.066524 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 22:29:55.066825 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 22:29:55.067023 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 22:29:55.067198 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 22:29:55.067359 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 22:29:55.067519 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 22:29:55.067739 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 22:29:55.067902 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 22:29:55.068062 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 22:29:55.068222 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 22:29:55.068392 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 22:29:55.068564 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 22:29:55.068760 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 22:29:55.068923 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 22:29:55.069087 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 22:29:55.069260 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 22:29:55.069430 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 22:29:55.069654 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 22:29:55.069823 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 22:29:55.069983 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 22:29:55.070143 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 22:29:55.070303 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 22:29:55.070463 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 22:29:55.070666 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 22:29:55.070835 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 22:29:55.071007 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 22:29:55.071171 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 22:29:55.071333 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 22:29:55.071496 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 22:29:55.071727 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 22:29:55.071898 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 22:29:55.072061 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 22:29:55.072223 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 22:29:55.072382 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 22:29:55.072542 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 22:29:55.072747 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 22:29:55.072906 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 22:29:55.073055 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 22:29:55.073213 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 22:29:55.073361 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 22:29:55.073513 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 22:29:55.073724 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 22:29:55.073893 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 22:29:55.074047 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 22:29:55.074200 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 22:29:55.074373 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 22:29:55.074536 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 22:29:55.074747 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 22:29:55.074903 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 22:29:55.075070 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 22:29:55.075224 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 22:29:55.075377 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 22:29:55.075569 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 22:29:55.075773 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 22:29:55.075926 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 22:29:55.076104 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 22:29:55.076256 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 22:29:55.076407 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 22:29:55.076580 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 22:29:55.076771 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 22:29:55.076925 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 22:29:55.077087 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 22:29:55.077240 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 22:29:55.077394 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 22:29:55.077572 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 22:29:55.077780 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 22:29:55.077942 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 22:29:55.077963 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 22:29:55.077977 kernel: PCI: CLS 0 bytes, default 64
Feb 13 22:29:55.077992 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 
13 22:29:55.078006 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 13 22:29:55.078020 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 22:29:55.078033 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 13 22:29:55.078047 kernel: Initialise system trusted keyrings Feb 13 22:29:55.078068 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 22:29:55.078082 kernel: Key type asymmetric registered Feb 13 22:29:55.078095 kernel: Asymmetric key parser 'x509' registered Feb 13 22:29:55.078108 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 22:29:55.078122 kernel: io scheduler mq-deadline registered Feb 13 22:29:55.078135 kernel: io scheduler kyber registered Feb 13 22:29:55.078149 kernel: io scheduler bfq registered Feb 13 22:29:55.078311 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 22:29:55.078474 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 22:29:55.078690 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.078858 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 22:29:55.079019 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Feb 13 22:29:55.079180 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.079347 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 22:29:55.079510 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 22:29:55.079744 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.079910 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 
22:29:55.080070 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 22:29:55.080230 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.080392 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 22:29:55.080563 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 22:29:55.080765 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.080936 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 22:29:55.081098 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 22:29:55.081260 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.081425 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 22:29:55.081602 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 22:29:55.081820 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.081983 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 22:29:55.082143 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 22:29:55.082303 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:29:55.082324 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 22:29:55.082339 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 22:29:55.082360 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 22:29:55.082374 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 22:29:55.082388 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 22:29:55.082402 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 22:29:55.082416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 22:29:55.082429 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 22:29:55.082443 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 22:29:55.082657 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 22:29:55.082827 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 22:29:55.082981 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T22:29:54 UTC (1739485794) Feb 13 22:29:55.083133 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 22:29:55.083153 kernel: intel_pstate: CPU model not supported Feb 13 22:29:55.083167 kernel: NET: Registered PF_INET6 protocol family Feb 13 22:29:55.083180 kernel: Segment Routing with IPv6 Feb 13 22:29:55.083194 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 22:29:55.083207 kernel: NET: Registered PF_PACKET protocol family Feb 13 22:29:55.083221 kernel: Key type dns_resolver registered Feb 13 22:29:55.083241 kernel: IPI shorthand broadcast: enabled Feb 13 22:29:55.083255 kernel: sched_clock: Marking stable (1311005523, 237535253)->(1673904845, -125364069) Feb 13 22:29:55.083269 kernel: registered taskstats version 1 Feb 13 22:29:55.083283 kernel: Loading compiled-in X.509 certificates Feb 13 22:29:55.083296 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 22:29:55.083310 kernel: Key type .fscrypt registered Feb 13 22:29:55.083323 kernel: Key type fscrypt-provisioning registered Feb 13 22:29:55.083336 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 22:29:55.083350 kernel: ima: Allocated hash algorithm: sha1 Feb 13 22:29:55.083369 kernel: ima: No architecture policies found Feb 13 22:29:55.083383 kernel: clk: Disabling unused clocks Feb 13 22:29:55.083396 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 22:29:55.083410 kernel: Write protecting the kernel read-only data: 36864k Feb 13 22:29:55.083423 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 22:29:55.083437 kernel: Run /init as init process Feb 13 22:29:55.083450 kernel: with arguments: Feb 13 22:29:55.083463 kernel: /init Feb 13 22:29:55.083476 kernel: with environment: Feb 13 22:29:55.083495 kernel: HOME=/ Feb 13 22:29:55.083508 kernel: TERM=linux Feb 13 22:29:55.083522 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 22:29:55.083557 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 22:29:55.083578 systemd[1]: Detected virtualization kvm. Feb 13 22:29:55.083592 systemd[1]: Detected architecture x86-64. Feb 13 22:29:55.083649 systemd[1]: Running in initrd. Feb 13 22:29:55.083674 systemd[1]: No hostname configured, using default hostname. Feb 13 22:29:55.083688 systemd[1]: Hostname set to . Feb 13 22:29:55.083703 systemd[1]: Initializing machine ID from VM UUID. Feb 13 22:29:55.083717 systemd[1]: Queued start job for default target initrd.target. Feb 13 22:29:55.083731 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 22:29:55.083745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 22:29:55.083760 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 22:29:55.083774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 22:29:55.083794 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 22:29:55.083809 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 22:29:55.083825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 22:29:55.083840 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 22:29:55.083854 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 22:29:55.083868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 22:29:55.083882 systemd[1]: Reached target paths.target - Path Units. Feb 13 22:29:55.083902 systemd[1]: Reached target slices.target - Slice Units. Feb 13 22:29:55.083917 systemd[1]: Reached target swap.target - Swaps. Feb 13 22:29:55.083931 systemd[1]: Reached target timers.target - Timer Units. Feb 13 22:29:55.083945 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 22:29:55.083960 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 22:29:55.083974 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 22:29:55.083988 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 22:29:55.084002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 22:29:55.084017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 22:29:55.084037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 22:29:55.084051 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 22:29:55.084066 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 22:29:55.084080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 22:29:55.084094 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 22:29:55.084108 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 22:29:55.084122 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 22:29:55.084136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 22:29:55.084158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:29:55.084217 systemd-journald[202]: Collecting audit messages is disabled. Feb 13 22:29:55.084250 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 22:29:55.084265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 22:29:55.084285 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 22:29:55.084301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 22:29:55.084316 systemd-journald[202]: Journal started Feb 13 22:29:55.084347 systemd-journald[202]: Runtime Journal (/run/log/journal/3e4ddd50b0bc44388f9b62d3e3cfcfe2) is 4.7M, max 38.0M, 33.2M free. Feb 13 22:29:55.051693 systemd-modules-load[203]: Inserted module 'overlay' Feb 13 22:29:55.139787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 22:29:55.139823 kernel: Bridge firewalling registered Feb 13 22:29:55.139842 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 22:29:55.100396 systemd-modules-load[203]: Inserted module 'br_netfilter' Feb 13 22:29:55.150320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 13 22:29:55.151505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:29:55.153039 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 22:29:55.165952 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 22:29:55.167777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 22:29:55.171794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 22:29:55.175450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 22:29:55.202039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:29:55.204343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 22:29:55.206519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:29:55.208512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 22:29:55.215900 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 22:29:55.220830 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 22:29:55.235346 dracut-cmdline[236]: dracut-dracut-053 Feb 13 22:29:55.242220 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 22:29:55.275835 systemd-resolved[238]: Positive Trust Anchors: Feb 13 22:29:55.276809 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 22:29:55.276860 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 22:29:55.285412 systemd-resolved[238]: Defaulting to hostname 'linux'. Feb 13 22:29:55.288483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 22:29:55.289676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 22:29:55.347641 kernel: SCSI subsystem initialized Feb 13 22:29:55.359665 kernel: Loading iSCSI transport class v2.0-870. Feb 13 22:29:55.372649 kernel: iscsi: registered transport (tcp) Feb 13 22:29:55.400092 kernel: iscsi: registered transport (qla4xxx) Feb 13 22:29:55.400168 kernel: QLogic iSCSI HBA Driver Feb 13 22:29:55.456450 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 22:29:55.466885 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 22:29:55.500739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 22:29:55.500821 kernel: device-mapper: uevent: version 1.0.3 Feb 13 22:29:55.503139 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 22:29:55.573687 kernel: raid6: sse2x4 gen() 13939 MB/s Feb 13 22:29:55.573782 kernel: raid6: sse2x2 gen() 9635 MB/s Feb 13 22:29:55.588267 kernel: raid6: sse2x1 gen() 10135 MB/s Feb 13 22:29:55.588334 kernel: raid6: using algorithm sse2x4 gen() 13939 MB/s Feb 13 22:29:55.607457 kernel: raid6: .... xor() 7578 MB/s, rmw enabled Feb 13 22:29:55.607576 kernel: raid6: using ssse3x2 recovery algorithm Feb 13 22:29:55.634672 kernel: xor: automatically using best checksumming function avx Feb 13 22:29:55.835648 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 22:29:55.850575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 22:29:55.865880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 22:29:55.881439 systemd-udevd[421]: Using default interface naming scheme 'v255'. Feb 13 22:29:55.888720 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 22:29:55.898792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 22:29:55.919004 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Feb 13 22:29:55.958806 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 22:29:55.964824 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 22:29:56.069927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 22:29:56.078170 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 22:29:56.111421 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 22:29:56.113646 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 22:29:56.115323 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 22:29:56.118048 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 22:29:56.125908 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 22:29:56.158344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 22:29:56.188629 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 13 22:29:56.264929 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 22:29:56.264958 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 22:29:56.265180 kernel: AVX version of gcm_enc/dec engaged. Feb 13 22:29:56.265213 kernel: AES CTR mode by8 optimization enabled Feb 13 22:29:56.265233 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 22:29:56.265252 kernel: GPT:17805311 != 125829119 Feb 13 22:29:56.265270 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 22:29:56.265287 kernel: GPT:17805311 != 125829119 Feb 13 22:29:56.265305 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 22:29:56.265323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:29:56.234873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 22:29:56.235063 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:29:56.236015 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 22:29:56.236780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 22:29:56.236948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:29:56.237724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:29:56.253937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 22:29:56.308634 kernel: ACPI: bus type USB registered Feb 13 22:29:56.308694 kernel: usbcore: registered new interface driver usbfs Feb 13 22:29:56.308715 kernel: usbcore: registered new interface driver hub Feb 13 22:29:56.308733 kernel: usbcore: registered new device driver usb Feb 13 22:29:56.338647 kernel: libata version 3.00 loaded. Feb 13 22:29:56.352067 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 22:29:56.434394 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 22:29:56.434778 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 22:29:56.434803 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472) Feb 13 22:29:56.434821 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 22:29:56.435021 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 22:29:56.435217 kernel: scsi host0: ahci Feb 13 22:29:56.435432 kernel: scsi host1: ahci Feb 13 22:29:56.436448 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (480) Feb 13 22:29:56.436480 kernel: scsi host2: ahci Feb 13 22:29:56.436730 kernel: scsi host3: ahci Feb 13 22:29:56.436939 kernel: scsi host4: ahci Feb 13 22:29:56.437126 kernel: scsi host5: ahci Feb 13 22:29:56.437318 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Feb 13 22:29:56.437339 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Feb 13 22:29:56.437357 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Feb 13 22:29:56.437382 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Feb 13 22:29:56.437401 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Feb 13 22:29:56.437419 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Feb 13 22:29:56.440789 systemd[1]: Finished systemd-vconsole-setup.service - 
Virtual Console Setup. Feb 13 22:29:56.453275 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 22:29:56.460378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 22:29:56.466228 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 22:29:56.467070 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 22:29:56.480857 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 22:29:56.484408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 22:29:56.489771 disk-uuid[564]: Primary Header is updated. Feb 13 22:29:56.489771 disk-uuid[564]: Secondary Entries is updated. Feb 13 22:29:56.489771 disk-uuid[564]: Secondary Header is updated. Feb 13 22:29:56.495660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:29:56.521755 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 22:29:56.688175 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 22:29:56.688259 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 22:29:56.688627 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 22:29:56.691594 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 22:29:56.695051 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 22:29:56.695100 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 22:29:56.704816 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 22:29:56.723911 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 13 22:29:56.724143 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 22:29:56.724347 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 22:29:56.724562 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 13 22:29:56.724790 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 13 22:29:56.725005 kernel: hub 1-0:1.0: USB hub found Feb 13 22:29:56.725240 kernel: hub 1-0:1.0: 4 ports detected Feb 13 22:29:56.725441 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Feb 13 22:29:56.725764 kernel: hub 2-0:1.0: USB hub found Feb 13 22:29:56.725981 kernel: hub 2-0:1.0: 4 ports detected Feb 13 22:29:56.962701 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 22:29:57.103665 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 22:29:57.110995 kernel: usbcore: registered new interface driver usbhid Feb 13 22:29:57.111041 kernel: usbhid: USB HID core driver Feb 13 22:29:57.119282 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 13 22:29:57.119327 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 22:29:57.508673 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:29:57.510275 disk-uuid[565]: The operation has completed successfully. Feb 13 22:29:57.566478 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 22:29:57.566683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 22:29:57.591861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 22:29:57.597701 sh[584]: Success Feb 13 22:29:57.614635 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 22:29:57.675583 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 22:29:57.683165 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 22:29:57.687128 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 22:29:57.717675 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 22:29:57.717753 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:29:57.719870 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 22:29:57.722033 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 22:29:57.723779 kernel: BTRFS info (device dm-0): using free space tree Feb 13 22:29:57.734059 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 22:29:57.736361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 22:29:57.742814 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 22:29:57.745855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 22:29:57.763854 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:29:57.763912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:29:57.763933 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:29:57.769636 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:29:57.784953 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 22:29:57.789636 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:29:57.796668 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 22:29:57.803837 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 22:29:57.927117 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 22:29:57.938908 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 22:29:57.945265 ignition[682]: Ignition 2.20.0 Feb 13 22:29:57.945305 ignition[682]: Stage: fetch-offline Feb 13 22:29:57.952112 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 22:29:57.945402 ignition[682]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:29:57.945421 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:29:57.945661 ignition[682]: parsed url from cmdline: "" Feb 13 22:29:57.945669 ignition[682]: no config URL provided Feb 13 22:29:57.945678 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 22:29:57.945695 ignition[682]: no config at "/usr/lib/ignition/user.ign" Feb 13 22:29:57.945712 ignition[682]: failed to fetch config: resource requires networking Feb 13 22:29:57.945990 ignition[682]: Ignition finished successfully Feb 13 22:29:57.979042 systemd-networkd[772]: lo: Link UP Feb 13 22:29:57.979062 systemd-networkd[772]: lo: Gained carrier Feb 13 22:29:57.981409 systemd-networkd[772]: Enumeration completed Feb 13 22:29:57.982060 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:29:57.982066 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 22:29:57.983439 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 22:29:57.983807 systemd-networkd[772]: eth0: Link UP Feb 13 22:29:57.983813 systemd-networkd[772]: eth0: Gained carrier Feb 13 22:29:57.983825 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:29:57.984893 systemd[1]: Reached target network.target - Network. Feb 13 22:29:57.997879 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 22:29:58.012334 ignition[776]: Ignition 2.20.0
Feb 13 22:29:58.012360 ignition[776]: Stage: fetch
Feb 13 22:29:58.012780 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Feb 13 22:29:58.012801 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 22:29:58.012928 ignition[776]: parsed url from cmdline: ""
Feb 13 22:29:58.012935 ignition[776]: no config URL provided
Feb 13 22:29:58.012945 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 22:29:58.012961 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Feb 13 22:29:58.013139 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 13 22:29:58.013446 ignition[776]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 22:29:58.019471 systemd-networkd[772]: eth0: DHCPv4 address 10.244.31.90/30, gateway 10.244.31.89 acquired from 10.244.31.89
Feb 13 22:29:58.013483 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 13 22:29:58.013518 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 13 22:29:58.214260 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Feb 13 22:29:58.231299 ignition[776]: GET result: OK
Feb 13 22:29:58.231415 ignition[776]: parsing config with SHA512: 9139c68168c97867485cd34602a6f3322267342e8b41fb4b8a5d8b74e3624d775903a6eb73e249e4b359391e848ffdfefeef18ea3b7f4353b20fcff780dcaeee
Feb 13 22:29:58.235999 unknown[776]: fetched base config from "system"
Feb 13 22:29:58.236016 unknown[776]: fetched base config from "system"
Feb 13 22:29:58.236351 ignition[776]: fetch: fetch complete
Feb 13 22:29:58.236025 unknown[776]: fetched user config from "openstack"
Feb 13 22:29:58.236360 ignition[776]: fetch: fetch passed
Feb 13 22:29:58.238847 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 22:29:58.236435 ignition[776]: Ignition finished successfully
Feb 13 22:29:58.249961 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 22:29:58.265542 ignition[783]: Ignition 2.20.0
Feb 13 22:29:58.265568 ignition[783]: Stage: kargs
Feb 13 22:29:58.265825 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 22:29:58.268021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 22:29:58.265846 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 22:29:58.266717 ignition[783]: kargs: kargs passed
Feb 13 22:29:58.266789 ignition[783]: Ignition finished successfully
Feb 13 22:29:58.274831 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 22:29:58.292845 ignition[790]: Ignition 2.20.0
Feb 13 22:29:58.292868 ignition[790]: Stage: disks
Feb 13 22:29:58.293087 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Feb 13 22:29:58.296078 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 22:29:58.293107 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 22:29:58.297778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 22:29:58.294008 ignition[790]: disks: disks passed
Feb 13 22:29:58.298560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 22:29:58.294078 ignition[790]: Ignition finished successfully
Feb 13 22:29:58.300210 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 22:29:58.301853 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 22:29:58.303374 systemd[1]: Reached target basic.target - Basic System.
Feb 13 22:29:58.310834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 22:29:58.332252 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 22:29:58.335473 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 22:29:58.342052 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 22:29:58.464632 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 22:29:58.465812 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 22:29:58.467139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 22:29:58.472716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 22:29:58.475727 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 22:29:58.478213 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 22:29:58.481073 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Feb 13 22:29:58.483039 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 22:29:58.483311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 22:29:58.498898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (807)
Feb 13 22:29:58.498932 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 22:29:58.498952 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 22:29:58.498977 kernel: BTRFS info (device vda6): using free space tree
Feb 13 22:29:58.498997 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 22:29:58.501198 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 22:29:58.504158 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 22:29:58.510792 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 22:29:58.600653 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 22:29:58.611162 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Feb 13 22:29:58.618202 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 22:29:58.624571 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 22:29:58.733443 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 22:29:58.737768 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 22:29:58.741795 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 22:29:58.754865 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 22:29:58.759793 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 22:29:58.779349 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 22:29:58.792631 ignition[926]: INFO : Ignition 2.20.0
Feb 13 22:29:58.792631 ignition[926]: INFO : Stage: mount
Feb 13 22:29:58.792631 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 22:29:58.792631 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 22:29:58.796767 ignition[926]: INFO : mount: mount passed
Feb 13 22:29:58.796767 ignition[926]: INFO : Ignition finished successfully
Feb 13 22:29:58.796304 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 22:29:59.638944 systemd-networkd[772]: eth0: Gained IPv6LL
Feb 13 22:30:01.148972 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:7d6:24:19ff:fef4:1f5a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:7d6:24:19ff:fef4:1f5a/64 assigned by NDisc.
Feb 13 22:30:01.148989 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 13 22:30:05.676904 coreos-metadata[809]: Feb 13 22:30:05.676 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 22:30:05.701016 coreos-metadata[809]: Feb 13 22:30:05.700 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 13 22:30:05.714821 coreos-metadata[809]: Feb 13 22:30:05.714 INFO Fetch successful
Feb 13 22:30:05.715916 coreos-metadata[809]: Feb 13 22:30:05.715 INFO wrote hostname srv-hlas6.gb1.brightbox.com to /sysroot/etc/hostname
Feb 13 22:30:05.718704 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 13 22:30:05.718885 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Feb 13 22:30:05.727759 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 22:30:05.742821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 22:30:05.758122 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Feb 13 22:30:05.758178 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 22:30:05.761877 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 22:30:05.761920 kernel: BTRFS info (device vda6): using free space tree
Feb 13 22:30:05.768822 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 22:30:05.770972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 22:30:05.795591 ignition[960]: INFO : Ignition 2.20.0
Feb 13 22:30:05.797633 ignition[960]: INFO : Stage: files
Feb 13 22:30:05.797633 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 22:30:05.797633 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 22:30:05.802647 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 22:30:05.802647 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 22:30:05.802647 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 22:30:05.805628 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 22:30:05.806664 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 22:30:05.806664 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 22:30:05.806301 unknown[960]: wrote ssh authorized keys file for user: core
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 22:30:05.809856 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 22:30:05.824523 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 22:30:05.824523 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 22:30:05.824523 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 22:30:06.387070 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 13 22:30:08.047637 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 22:30:08.047637 ignition[960]: INFO : files: op(8): [started] processing unit "containerd.service"
Feb 13 22:30:08.050664 ignition[960]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 22:30:08.050664 ignition[960]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 22:30:08.050664 ignition[960]: INFO : files: op(8): [finished] processing unit "containerd.service"
Feb 13 22:30:08.054587 ignition[960]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 22:30:08.054587 ignition[960]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 22:30:08.054587 ignition[960]: INFO : files: files passed
Feb 13 22:30:08.054587 ignition[960]: INFO : Ignition finished successfully
Feb 13 22:30:08.054541 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 22:30:08.078430 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 22:30:08.081933 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 22:30:08.084474 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 22:30:08.085525 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 22:30:08.112342 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 22:30:08.112342 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 22:30:08.114951 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 22:30:08.118555 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 22:30:08.119881 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 22:30:08.127884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 22:30:08.157686 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 22:30:08.163139 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 22:30:08.167809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 22:30:08.169183 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 22:30:08.170937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 22:30:08.177904 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 22:30:08.210596 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 22:30:08.217832 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 22:30:08.234004 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 22:30:08.236055 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 22:30:08.238142 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 22:30:08.239030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 22:30:08.239293 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 22:30:08.241130 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 22:30:08.242171 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 22:30:08.243841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 22:30:08.245301 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 22:30:08.249417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 22:30:08.251265 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 22:30:08.253009 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 22:30:08.256078 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 22:30:08.257135 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 22:30:08.258042 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 22:30:08.259484 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 22:30:08.259791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 22:30:08.261499 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 22:30:08.262672 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 22:30:08.263555 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 22:30:08.263755 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 22:30:08.265234 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 22:30:08.265513 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 22:30:08.267265 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 22:30:08.267545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 22:30:08.270597 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 22:30:08.270801 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 22:30:08.280461 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 22:30:08.281231 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 22:30:08.281527 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 22:30:08.285212 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 22:30:08.288121 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 22:30:08.288724 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 22:30:08.291743 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 22:30:08.292485 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 22:30:08.302148 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 22:30:08.302407 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 22:30:08.311712 ignition[1013]: INFO : Ignition 2.20.0
Feb 13 22:30:08.312869 ignition[1013]: INFO : Stage: umount
Feb 13 22:30:08.313917 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 22:30:08.315966 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 22:30:08.315966 ignition[1013]: INFO : umount: umount passed
Feb 13 22:30:08.315966 ignition[1013]: INFO : Ignition finished successfully
Feb 13 22:30:08.319575 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 22:30:08.319791 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 22:30:08.322842 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 22:30:08.322924 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 22:30:08.325299 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 22:30:08.325395 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 22:30:08.333796 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 22:30:08.334795 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 22:30:08.336495 systemd[1]: Stopped target network.target - Network.
Feb 13 22:30:08.337918 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 22:30:08.338037 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 22:30:08.339880 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 22:30:08.342265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 22:30:08.342452 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 22:30:08.344167 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 22:30:08.344844 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 22:30:08.346675 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 22:30:08.346761 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 22:30:08.348273 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 22:30:08.348348 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 22:30:08.349870 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 22:30:08.349946 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 22:30:08.351460 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 22:30:08.351530 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 22:30:08.352647 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 22:30:08.354486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 22:30:08.357689 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 22:30:08.358787 systemd-networkd[772]: eth0: DHCPv6 lease lost
Feb 13 22:30:08.367296 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 22:30:08.367507 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 22:30:08.368721 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 22:30:08.368786 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 22:30:08.377785 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 22:30:08.378739 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 22:30:08.378840 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 22:30:08.380518 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 22:30:08.387975 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 22:30:08.388163 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 22:30:08.395023 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 22:30:08.395280 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 22:30:08.398584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 22:30:08.398703 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 22:30:08.400886 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 22:30:08.400949 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 22:30:08.402460 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 22:30:08.402540 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 22:30:08.404789 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 22:30:08.404862 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 22:30:08.407787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 22:30:08.407863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 22:30:08.414816 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 22:30:08.415990 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 22:30:08.416077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 22:30:08.419377 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 22:30:08.419456 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 22:30:08.420207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 22:30:08.420285 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 22:30:08.422722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 22:30:08.422790 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 22:30:08.425640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 22:30:08.425713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 22:30:08.427141 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 22:30:08.427292 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 22:30:08.428474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 22:30:08.428600 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 22:30:08.446327 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 22:30:08.446534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 22:30:08.448683 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 22:30:08.449464 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 22:30:08.449553 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 22:30:08.456889 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 22:30:08.469375 systemd[1]: Switching root.
Feb 13 22:30:08.509836 systemd-journald[202]: Journal stopped
Feb 13 22:30:10.046811 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Feb 13 22:30:10.046942 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 22:30:10.046968 kernel: SELinux: policy capability open_perms=1
Feb 13 22:30:10.046995 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 22:30:10.047020 kernel: SELinux: policy capability always_check_network=0
Feb 13 22:30:10.047052 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 22:30:10.047073 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 22:30:10.047091 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 22:30:10.047126 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 22:30:10.047147 kernel: audit: type=1403 audit(1739485808.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 22:30:10.047172 systemd[1]: Successfully loaded SELinux policy in 50.986ms.
Feb 13 22:30:10.047196 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.743ms.
Feb 13 22:30:10.047217 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 22:30:10.047239 systemd[1]: Detected virtualization kvm.
Feb 13 22:30:10.047258 systemd[1]: Detected architecture x86-64.
Feb 13 22:30:10.047277 systemd[1]: Detected first boot.
Feb 13 22:30:10.047322 systemd[1]: Hostname set to .
Feb 13 22:30:10.047344 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 22:30:10.047365 zram_generator::config[1072]: No configuration found.
Feb 13 22:30:10.047385 systemd[1]: Populated /etc with preset unit settings.
Feb 13 22:30:10.047405 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 22:30:10.047425 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 22:30:10.047446 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 22:30:10.047467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 22:30:10.047500 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 22:30:10.047522 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 22:30:10.047541 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 22:30:10.047561 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 22:30:10.047580 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 22:30:10.047599 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 22:30:10.056786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 22:30:10.056819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 22:30:10.056852 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 22:30:10.056892 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 22:30:10.056915 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 22:30:10.056961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 22:30:10.056984 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 22:30:10.057011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 22:30:10.057032 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 22:30:10.057052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 22:30:10.057089 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 22:30:10.057112 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 22:30:10.057142 systemd[1]: Reached target swap.target - Swaps.
Feb 13 22:30:10.057163 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 22:30:10.057191 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 22:30:10.057212 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 22:30:10.057232 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 22:30:10.057252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 22:30:10.057301 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 22:30:10.057339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 22:30:10.057362 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 22:30:10.057383 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 22:30:10.057403 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 22:30:10.057423 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 22:30:10.057442 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:10.057462 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 22:30:10.057482 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 22:30:10.057514 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 22:30:10.057536 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 22:30:10.057557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 22:30:10.057577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 22:30:10.062666 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 22:30:10.062726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 22:30:10.062771 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 22:30:10.062794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 22:30:10.062815 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 22:30:10.062842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 22:30:10.062865 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 22:30:10.062885 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 22:30:10.062906 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 22:30:10.062927 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 22:30:10.062960 kernel: fuse: init (API version 7.39)
Feb 13 22:30:10.062983 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 22:30:10.063002 kernel: loop: module loaded
Feb 13 22:30:10.063029 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 22:30:10.063050 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 22:30:10.063111 systemd-journald[1190]: Collecting audit messages is disabled.
Feb 13 22:30:10.063166 kernel: ACPI: bus type drm_connector registered
Feb 13 22:30:10.063188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 22:30:10.063223 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:10.063246 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 22:30:10.063267 systemd-journald[1190]: Journal started
Feb 13 22:30:10.063311 systemd-journald[1190]: Runtime Journal (/run/log/journal/3e4ddd50b0bc44388f9b62d3e3cfcfe2) is 4.7M, max 38.0M, 33.2M free.
Feb 13 22:30:10.067686 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 22:30:10.071149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 22:30:10.072401 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 22:30:10.073344 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 22:30:10.074308 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 22:30:10.075328 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 22:30:10.076564 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 22:30:10.077886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 22:30:10.079153 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 22:30:10.079530 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 22:30:10.081111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 22:30:10.081470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 22:30:10.082881 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 22:30:10.083113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 22:30:10.084329 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 22:30:10.084554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 22:30:10.086270 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 22:30:10.086626 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 22:30:10.087993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 22:30:10.088388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 22:30:10.089862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 22:30:10.091115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 22:30:10.092669 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 22:30:10.105981 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 22:30:10.112787 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 22:30:10.120697 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 22:30:10.124361 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 22:30:10.130813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 22:30:10.150883 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 22:30:10.152265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 22:30:10.155941 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 22:30:10.159764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 22:30:10.174776 systemd-journald[1190]: Time spent on flushing to /var/log/journal/3e4ddd50b0bc44388f9b62d3e3cfcfe2 is 35.696ms for 1107 entries.
Feb 13 22:30:10.174776 systemd-journald[1190]: System Journal (/var/log/journal/3e4ddd50b0bc44388f9b62d3e3cfcfe2) is 8.0M, max 584.8M, 576.8M free.
Feb 13 22:30:10.244570 systemd-journald[1190]: Received client request to flush runtime journal.
Feb 13 22:30:10.172894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 22:30:10.177740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 22:30:10.182678 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 22:30:10.195073 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 22:30:10.217840 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 22:30:10.221246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 22:30:10.225025 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 22:30:10.251909 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 22:30:10.270269 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Feb 13 22:30:10.270910 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Feb 13 22:30:10.279602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 22:30:10.292731 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 22:30:10.344703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 22:30:10.354984 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 22:30:10.371409 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 22:30:10.387880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 22:30:10.397766 udevadm[1247]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 22:30:10.416625 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Feb 13 22:30:10.416659 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Feb 13 22:30:10.423877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 22:30:10.939291 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 22:30:10.950937 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 22:30:10.994622 systemd-udevd[1256]: Using default interface naming scheme 'v255'.
Feb 13 22:30:11.024458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 22:30:11.036830 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 22:30:11.058797 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 22:30:11.124805 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 22:30:11.154320 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 22:30:11.210864 systemd-networkd[1261]: lo: Link UP
Feb 13 22:30:11.211435 systemd-networkd[1261]: lo: Gained carrier
Feb 13 22:30:11.213635 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1266)
Feb 13 22:30:11.214369 systemd-networkd[1261]: Enumeration completed
Feb 13 22:30:11.215202 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 22:30:11.218424 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 22:30:11.218431 systemd-networkd[1261]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 22:30:11.220498 systemd-networkd[1261]: eth0: Link UP
Feb 13 22:30:11.220602 systemd-networkd[1261]: eth0: Gained carrier
Feb 13 22:30:11.220719 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 22:30:11.223813 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 22:30:11.236707 systemd-networkd[1261]: eth0: DHCPv4 address 10.244.31.90/30, gateway 10.244.31.89 acquired from 10.244.31.89
Feb 13 22:30:11.290920 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 22:30:11.339442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 22:30:11.352643 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 22:30:11.355636 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 22:30:11.361644 kernel: ACPI: button: Power Button [PWRF]
Feb 13 22:30:11.388639 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 22:30:11.396352 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 22:30:11.396779 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 22:30:11.410670 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 22:30:11.461800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 22:30:11.630709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 22:30:11.663163 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 22:30:11.670843 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 22:30:11.688781 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 22:30:11.722392 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 22:30:11.724375 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 22:30:11.732033 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 22:30:11.740756 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 22:30:11.771217 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 22:30:11.773131 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 22:30:11.774371 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 22:30:11.774521 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 22:30:11.775417 systemd[1]: Reached target machines.target - Containers.
Feb 13 22:30:11.777985 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 22:30:11.784860 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 22:30:11.788834 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 22:30:11.791993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 22:30:11.794000 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 22:30:11.804849 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 22:30:11.811860 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 22:30:11.823532 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 22:30:11.826832 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 22:30:11.851643 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 22:30:11.863035 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 22:30:11.870215 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 22:30:11.888806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 22:30:11.929671 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 22:30:11.974681 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 22:30:12.020926 kernel: loop3: detected capacity change from 0 to 8
Feb 13 22:30:12.047849 kernel: loop4: detected capacity change from 0 to 210664
Feb 13 22:30:12.062634 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 22:30:12.081643 kernel: loop6: detected capacity change from 0 to 140992
Feb 13 22:30:12.102088 kernel: loop7: detected capacity change from 0 to 8
Feb 13 22:30:12.103556 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Feb 13 22:30:12.104911 (sd-merge)[1320]: Merged extensions into '/usr'.
Feb 13 22:30:12.114032 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 22:30:12.114072 systemd[1]: Reloading...
Feb 13 22:30:12.220635 zram_generator::config[1348]: No configuration found.
Feb 13 22:30:12.392888 ldconfig[1303]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 22:30:12.471654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 22:30:12.554422 systemd[1]: Reloading finished in 439 ms.
Feb 13 22:30:12.573489 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 22:30:12.578707 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 22:30:12.592887 systemd[1]: Starting ensure-sysext.service...
Feb 13 22:30:12.597813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 22:30:12.606052 systemd[1]: Reloading requested from client PID 1411 ('systemctl') (unit ensure-sysext.service)...
Feb 13 22:30:12.606076 systemd[1]: Reloading...
Feb 13 22:30:12.655931 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 22:30:12.656556 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 22:30:12.660885 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 22:30:12.661473 systemd-tmpfiles[1412]: ACLs are not supported, ignoring.
Feb 13 22:30:12.661720 systemd-tmpfiles[1412]: ACLs are not supported, ignoring.
Feb 13 22:30:12.667628 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 22:30:12.667811 systemd-tmpfiles[1412]: Skipping /boot
Feb 13 22:30:12.685949 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 22:30:12.686146 systemd-tmpfiles[1412]: Skipping /boot
Feb 13 22:30:12.717638 zram_generator::config[1440]: No configuration found.
Feb 13 22:30:12.907785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 22:30:12.992168 systemd[1]: Reloading finished in 384 ms.
Feb 13 22:30:13.024343 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 22:30:13.036918 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 22:30:13.041817 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 22:30:13.047866 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 22:30:13.064138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 22:30:13.068842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 22:30:13.083391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:13.084655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 22:30:13.090928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 22:30:13.101960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 22:30:13.110930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 22:30:13.111872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 22:30:13.112024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:13.120540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:13.123348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 22:30:13.124965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 22:30:13.125183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:13.129215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 22:30:13.132354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 22:30:13.145703 systemd-networkd[1261]: eth0: Gained IPv6LL
Feb 13 22:30:13.153059 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 22:30:13.156212 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 22:30:13.158482 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 22:30:13.159625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 22:30:13.167489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 22:30:13.169880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 22:30:13.178748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:13.180883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 22:30:13.192844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 22:30:13.197976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 22:30:13.200526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 22:30:13.201324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 22:30:13.201390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 22:30:13.202039 systemd[1]: Finished ensure-sysext.service.
Feb 13 22:30:13.210773 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 22:30:13.216303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 22:30:13.218969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 22:30:13.225217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 22:30:13.235669 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 22:30:13.246897 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 22:30:13.248704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 22:30:13.250995 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 22:30:13.258121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 22:30:13.259502 augenrules[1555]: No rules
Feb 13 22:30:13.269215 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 22:30:13.269600 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 22:30:13.277542 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 22:30:13.285246 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 22:30:13.300915 systemd-resolved[1513]: Positive Trust Anchors:
Feb 13 22:30:13.300952 systemd-resolved[1513]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 22:30:13.300998 systemd-resolved[1513]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 22:30:13.307573 systemd-resolved[1513]: Using system hostname 'srv-hlas6.gb1.brightbox.com'.
Feb 13 22:30:13.310724 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 22:30:13.311722 systemd[1]: Reached target network.target - Network.
Feb 13 22:30:13.312389 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 22:30:13.314886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 22:30:13.369478 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 22:30:13.370983 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 22:30:13.371931 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 22:30:13.372828 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 22:30:13.373682 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 22:30:13.374504 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 22:30:13.374559 systemd[1]: Reached target paths.target - Path Units.
Feb 13 22:30:13.375219 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 22:30:13.376247 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 22:30:13.377142 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 22:30:13.377942 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 22:30:13.380954 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 22:30:13.384854 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 22:30:13.388929 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 22:30:13.392011 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 22:30:13.392801 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 22:30:13.393553 systemd[1]: Reached target basic.target - Basic System.
Feb 13 22:30:13.394523 systemd[1]: System is tainted: cgroupsv1
Feb 13 22:30:13.394593 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 22:30:13.394663 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 22:30:13.398079 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 22:30:13.401820 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 22:30:13.407811 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 22:30:13.414736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 22:30:13.425933 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 22:30:13.428702 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 22:30:13.435742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 22:30:13.440382 jq[1575]: false
Feb 13 22:30:13.448929 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 22:30:13.466911 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 22:30:13.473794 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 22:30:13.486525 dbus-daemon[1573]: [system] SELinux support is enabled
Feb 13 22:30:13.487822 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found loop4
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found loop5
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found loop6
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found loop7
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda1
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda2
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda3
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found usr
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda4
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda6
Feb 13 22:30:13.493488 extend-filesystems[1578]: Found vda7
Feb 13 22:30:13.500738 dbus-daemon[1573]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1261 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 22:30:13.500895 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 22:30:13.571953 extend-filesystems[1578]: Found vda9
Feb 13 22:30:13.571953 extend-filesystems[1578]: Checking size of /dev/vda9
Feb 13 22:30:13.504542 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 22:30:13.522444 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 22:30:13.531792 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 22:30:13.581464 update_engine[1596]: I20250213 22:30:13.546648 1596 main.cc:92] Flatcar Update Engine starting
Feb 13 22:30:13.581464 update_engine[1596]: I20250213 22:30:13.548637 1596 update_check_scheduler.cc:74] Next update check in 8m37s
Feb 13 22:30:13.602723 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Feb 13 22:30:13.537487 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 22:30:13.603078 extend-filesystems[1578]: Resized partition /dev/vda9
Feb 13 22:30:13.557853 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 22:30:13.612967 jq[1601]: true
Feb 13 22:30:13.613259 extend-filesystems[1614]: resize2fs 1.47.1 (20-May-2024)
Feb 13 22:30:13.558267 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 22:30:13.573517 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 22:30:13.584998 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 22:30:13.590034 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 22:30:13.592928 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 22:30:13.593263 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 22:30:14.580689 systemd-timesyncd[1551]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org).
Feb 13 22:30:14.580772 systemd-timesyncd[1551]: Initial clock synchronization to Thu 2025-02-13 22:30:14.580480 UTC.
Feb 13 22:30:14.598516 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 22:30:14.581601 systemd-resolved[1513]: Clock change detected. Flushing caches.
Feb 13 22:30:14.596384 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 22:30:14.596445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 22:30:14.598493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 22:30:14.598532 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 22:30:14.602213 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 22:30:14.607945 (ntainerd)[1615]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 22:30:14.616159 jq[1618]: true
Feb 13 22:30:14.621435 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 22:30:14.623303 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 22:30:14.626378 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 22:30:14.691244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1259)
Feb 13 22:30:14.798945 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button)
Feb 13 22:30:14.799021 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 22:30:14.801490 systemd-logind[1591]: New seat seat0.
Feb 13 22:30:14.817995 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 22:30:14.862241 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 22:30:14.934098 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Feb 13 22:30:14.956105 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 22:30:14.956105 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 8
Feb 13 22:30:14.956105 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Feb 13 22:30:14.969119 extend-filesystems[1578]: Resized filesystem in /dev/vda9
Feb 13 22:30:14.959044 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 22:30:14.979464 bash[1650]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 22:30:14.959510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 22:30:14.974371 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 22:30:14.984253 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 22:30:14.993686 systemd[1]: Starting sshkeys.service...
Feb 13 22:30:15.012362 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 22:30:15.020781 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 22:30:15.056645 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 22:30:15.058788 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 22:30:15.061774 dbus-daemon[1573]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1627 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 22:30:15.074647 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 22:30:15.104977 polkitd[1670]: Started polkitd version 121
Feb 13 22:30:15.110426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 22:30:15.121590 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 22:30:15.140553 polkitd[1670]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 22:30:15.140669 polkitd[1670]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 22:30:15.147725 polkitd[1670]: Finished loading, compiling and executing 2 rules
Feb 13 22:30:15.149509 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 22:30:15.149741 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 22:30:15.149979 polkitd[1670]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 22:30:15.158822 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 22:30:15.159465 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 22:30:15.171656 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 22:30:15.190294 systemd-hostnamed[1627]: Hostname set to (static)
Feb 13 22:30:15.203887 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 22:30:15.209381 systemd-networkd[1261]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:7d6:24:19ff:fef4:1f5a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:7d6:24:19ff:fef4:1f5a/64 assigned by NDisc.
Feb 13 22:30:15.209392 systemd-networkd[1261]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 13 22:30:15.217752 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 22:30:15.223298 containerd[1615]: time="2025-02-13T22:30:15.222526620Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 22:30:15.227491 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 22:30:15.237569 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 22:30:15.262289 containerd[1615]: time="2025-02-13T22:30:15.262231828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.265032 containerd[1615]: time="2025-02-13T22:30:15.264991530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 22:30:15.265206 containerd[1615]: time="2025-02-13T22:30:15.265151610Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 22:30:15.265327 containerd[1615]: time="2025-02-13T22:30:15.265301314Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 22:30:15.265671 containerd[1615]: time="2025-02-13T22:30:15.265642761Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 22:30:15.265789 containerd[1615]: time="2025-02-13T22:30:15.265764747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.265982 containerd[1615]: time="2025-02-13T22:30:15.265953909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 22:30:15.266103 containerd[1615]: time="2025-02-13T22:30:15.266075776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.266787 containerd[1615]: time="2025-02-13T22:30:15.266469633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 22:30:15.266787 containerd[1615]: time="2025-02-13T22:30:15.266500760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.266787 containerd[1615]: time="2025-02-13T22:30:15.266523807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 22:30:15.266787 containerd[1615]: time="2025-02-13T22:30:15.266540053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.266787 containerd[1615]: time="2025-02-13T22:30:15.266661475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.267055 containerd[1615]: time="2025-02-13T22:30:15.267028668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 22:30:15.267677 containerd[1615]: time="2025-02-13T22:30:15.267221995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 22:30:15.267677 containerd[1615]: time="2025-02-13T22:30:15.267252833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 22:30:15.267677 containerd[1615]: time="2025-02-13T22:30:15.267388094Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 22:30:15.267677 containerd[1615]: time="2025-02-13T22:30:15.267470167Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.273451676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.273560107Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.273589637Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.273612907Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.273635486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.273848365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274275700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274445606Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274471615Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274492909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274513371Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274535895Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274555189Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.274702 containerd[1615]: time="2025-02-13T22:30:15.274583993Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274617067Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274643868Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274678929Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274705222Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274741695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274765269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274790656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274827589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274846938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274866873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274886018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274909194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274937998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275316 containerd[1615]: time="2025-02-13T22:30:15.274960858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.274986261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275015983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275044025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275067038Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275102423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275127381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275145508Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275294263Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275324383Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275341816Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275360734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275377110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275402253Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 22:30:15.275849 containerd[1615]: time="2025-02-13T22:30:15.275427265Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 22:30:15.276369 containerd[1615]: time="2025-02-13T22:30:15.275446745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 22:30:15.276413 containerd[1615]: time="2025-02-13T22:30:15.275936129Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 22:30:15.276413 containerd[1615]: time="2025-02-13T22:30:15.276013118Z" level=info msg="Connect containerd service"
Feb 13 22:30:15.276413 containerd[1615]: time="2025-02-13T22:30:15.276098136Z" level=info msg="using legacy CRI server"
Feb 13 22:30:15.276413 containerd[1615]: time="2025-02-13T22:30:15.276114113Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 22:30:15.276413 containerd[1615]: time="2025-02-13T22:30:15.276290824Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.277085958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.277648971Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.277724609Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.277924672Z" level=info msg="Start subscribing containerd event"
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.277982946Z" level=info msg="Start recovering state"
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.278094380Z" level=info msg="Start event monitor"
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.278139570Z" level=info msg="Start snapshots syncer"
Feb 13 22:30:15.278161 containerd[1615]: time="2025-02-13T22:30:15.278157812Z" level=info msg="Start cni network conf syncer for default"
Feb 13 22:30:15.280988 containerd[1615]: time="2025-02-13T22:30:15.278172017Z" level=info msg="Start streaming server"
Feb 13 22:30:15.280988 containerd[1615]: time="2025-02-13T22:30:15.278306970Z" level=info msg="containerd successfully booted in 0.058737s"
Feb 13 22:30:15.278453 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 22:30:15.894405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 22:30:15.901517 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 22:30:16.562268 kubelet[1706]: E0213 22:30:16.562133    1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 22:30:16.566290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 22:30:16.566648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 22:30:20.290260 login[1692]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 22:30:20.291394 login[1691]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 22:30:20.306924 systemd-logind[1591]: New session 1 of user core.
Feb 13 22:30:20.310037 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 22:30:20.320668 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 22:30:20.324548 systemd-logind[1591]: New session 2 of user core.
Feb 13 22:30:20.341114 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 22:30:20.351684 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 22:30:20.358646 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 22:30:20.491012 systemd[1726]: Queued start job for default target default.target.
Feb 13 22:30:20.491572 systemd[1726]: Created slice app.slice - User Application Slice.
Feb 13 22:30:20.491604 systemd[1726]: Reached target paths.target - Paths.
Feb 13 22:30:20.491626 systemd[1726]: Reached target timers.target - Timers.
Feb 13 22:30:20.503332 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 22:30:20.511838 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 22:30:20.511918 systemd[1726]: Reached target sockets.target - Sockets.
Feb 13 22:30:20.511942 systemd[1726]: Reached target basic.target - Basic System.
Feb 13 22:30:20.512005 systemd[1726]: Reached target default.target - Main User Target.
Feb 13 22:30:20.512056 systemd[1726]: Startup finished in 144ms.
Feb 13 22:30:20.512756 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 22:30:20.518732 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 22:30:20.522394 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 22:30:21.462271 coreos-metadata[1572]: Feb 13 22:30:21.462 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 22:30:21.489491 coreos-metadata[1572]: Feb 13 22:30:21.489 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Feb 13 22:30:21.495958 coreos-metadata[1572]: Feb 13 22:30:21.495 INFO Fetch failed with 404: resource not found
Feb 13 22:30:21.496029 coreos-metadata[1572]: Feb 13 22:30:21.496 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 13 22:30:21.496878 coreos-metadata[1572]: Feb 13 22:30:21.496 INFO Fetch successful
Feb 13 22:30:21.497007 coreos-metadata[1572]: Feb 13 22:30:21.496 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Feb 13 22:30:21.510123 coreos-metadata[1572]: Feb 13 22:30:21.510 INFO Fetch successful
Feb 13 22:30:21.510310 coreos-metadata[1572]: Feb 13 22:30:21.510 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Feb 13 22:30:21.525323 coreos-metadata[1572]: Feb 13 22:30:21.525 INFO Fetch successful
Feb 13 22:30:21.525460 coreos-metadata[1572]: Feb 13 22:30:21.525 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Feb 13 22:30:21.543347 coreos-metadata[1572]: Feb 13 22:30:21.543 INFO Fetch successful
Feb 13 22:30:21.543492 coreos-metadata[1572]: Feb 13 22:30:21.543 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Feb 13 22:30:21.564779 coreos-metadata[1572]: Feb 13 22:30:21.564 INFO Fetch successful
Feb 13 22:30:21.589988 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 22:30:21.591938 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 22:30:22.130617 coreos-metadata[1665]: Feb 13 22:30:22.130 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 22:30:22.152808 coreos-metadata[1665]: Feb 13 22:30:22.152 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb 13 22:30:22.174792 coreos-metadata[1665]: Feb 13 22:30:22.174 INFO Fetch successful
Feb 13 22:30:22.174989 coreos-metadata[1665]: Feb 13 22:30:22.174 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 22:30:22.202936 coreos-metadata[1665]: Feb 13 22:30:22.202 INFO Fetch successful
Feb 13 22:30:22.205108 unknown[1665]: wrote ssh authorized keys file for user: core
Feb 13 22:30:22.240688 update-ssh-keys[1773]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 22:30:22.241734 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 22:30:22.247162 systemd[1]: Finished sshkeys.service.
Feb 13 22:30:22.250666 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 22:30:22.250859 systemd[1]: Startup finished in 15.549s (kernel) + 12.523s (userspace) = 28.072s.
Feb 13 22:30:23.484791 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 22:30:23.501630 systemd[1]: Started sshd@0-10.244.31.90:22-147.75.109.163:60880.service - OpenSSH per-connection server daemon (147.75.109.163:60880).
Feb 13 22:30:24.407035 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 60880 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:24.408951 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:24.416472 systemd-logind[1591]: New session 3 of user core.
Feb 13 22:30:24.423684 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 22:30:25.170556 systemd[1]: Started sshd@1-10.244.31.90:22-147.75.109.163:60888.service - OpenSSH per-connection server daemon (147.75.109.163:60888).
Feb 13 22:30:26.109864 sshd[1785]: Accepted publickey for core from 147.75.109.163 port 60888 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:26.111747 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:26.118277 systemd-logind[1591]: New session 4 of user core.
Feb 13 22:30:26.126737 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 22:30:26.589076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 22:30:26.595445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 22:30:26.732480 sshd[1788]: Connection closed by 147.75.109.163 port 60888
Feb 13 22:30:26.734372 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Feb 13 22:30:26.741876 systemd[1]: sshd@1-10.244.31.90:22-147.75.109.163:60888.service: Deactivated successfully.
Feb 13 22:30:26.751974 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 22:30:26.756294 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit.
Feb 13 22:30:26.760460 systemd-logind[1591]: Removed session 4.
Feb 13 22:30:26.772429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 22:30:26.783877 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 22:30:26.852234 kubelet[1805]: E0213 22:30:26.851014    1805 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 22:30:26.855096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 22:30:26.855414 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 22:30:26.882554 systemd[1]: Started sshd@2-10.244.31.90:22-147.75.109.163:60902.service - OpenSSH per-connection server daemon (147.75.109.163:60902).
Feb 13 22:30:27.775876 sshd[1814]: Accepted publickey for core from 147.75.109.163 port 60902 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:27.778128 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:27.784891 systemd-logind[1591]: New session 5 of user core.
Feb 13 22:30:27.795893 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 22:30:28.388616 sshd[1817]: Connection closed by 147.75.109.163 port 60902
Feb 13 22:30:28.389511 sshd-session[1814]: pam_unix(sshd:session): session closed for user core
Feb 13 22:30:28.394244 systemd[1]: sshd@2-10.244.31.90:22-147.75.109.163:60902.service: Deactivated successfully.
Feb 13 22:30:28.397420 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit.
Feb 13 22:30:28.398789 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 22:30:28.399866 systemd-logind[1591]: Removed session 5.
Feb 13 22:30:28.547629 systemd[1]: Started sshd@3-10.244.31.90:22-147.75.109.163:60916.service - OpenSSH per-connection server daemon (147.75.109.163:60916).
Feb 13 22:30:29.439122 sshd[1822]: Accepted publickey for core from 147.75.109.163 port 60916 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:29.441041 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:29.448493 systemd-logind[1591]: New session 6 of user core.
Feb 13 22:30:29.455681 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 22:30:30.060241 sshd[1825]: Connection closed by 147.75.109.163 port 60916
Feb 13 22:30:30.061096 sshd-session[1822]: pam_unix(sshd:session): session closed for user core
Feb 13 22:30:30.065000 systemd[1]: sshd@3-10.244.31.90:22-147.75.109.163:60916.service: Deactivated successfully.
Feb 13 22:30:30.068902 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit.
Feb 13 22:30:30.069797 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 22:30:30.071031 systemd-logind[1591]: Removed session 6.
Feb 13 22:30:30.209532 systemd[1]: Started sshd@4-10.244.31.90:22-147.75.109.163:51396.service - OpenSSH per-connection server daemon (147.75.109.163:51396).
Feb 13 22:30:31.105996 sshd[1830]: Accepted publickey for core from 147.75.109.163 port 51396 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:31.107808 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:31.114976 systemd-logind[1591]: New session 7 of user core.
Feb 13 22:30:31.124695 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 22:30:31.592058 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 22:30:31.593750 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 22:30:31.607360 sudo[1834]: pam_unix(sudo:session): session closed for user root
Feb 13 22:30:31.751586 sshd[1833]: Connection closed by 147.75.109.163 port 51396
Feb 13 22:30:31.750595 sshd-session[1830]: pam_unix(sshd:session): session closed for user core
Feb 13 22:30:31.755868 systemd[1]: sshd@4-10.244.31.90:22-147.75.109.163:51396.service: Deactivated successfully.
Feb 13 22:30:31.759274 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 22:30:31.760171 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit.
Feb 13 22:30:31.761902 systemd-logind[1591]: Removed session 7.
Feb 13 22:30:31.903580 systemd[1]: Started sshd@5-10.244.31.90:22-147.75.109.163:51402.service - OpenSSH per-connection server daemon (147.75.109.163:51402).
Feb 13 22:30:32.792396 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 51402 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:32.794249 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:32.800239 systemd-logind[1591]: New session 8 of user core.
Feb 13 22:30:32.808603 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 22:30:33.269059 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 22:30:33.269592 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 22:30:33.274672 sudo[1844]: pam_unix(sudo:session): session closed for user root
Feb 13 22:30:33.282439 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 22:30:33.283291 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 22:30:33.300729 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 22:30:33.344382 augenrules[1866]: No rules
Feb 13 22:30:33.346139 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 22:30:33.346577 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 22:30:33.349413 sudo[1843]: pam_unix(sudo:session): session closed for user root
Feb 13 22:30:33.492456 sshd[1842]: Connection closed by 147.75.109.163 port 51402
Feb 13 22:30:33.493594 sshd-session[1839]: pam_unix(sshd:session): session closed for user core
Feb 13 22:30:33.498688 systemd[1]: sshd@5-10.244.31.90:22-147.75.109.163:51402.service: Deactivated successfully.
Feb 13 22:30:33.501826 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit.
Feb 13 22:30:33.502703 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 22:30:33.504579 systemd-logind[1591]: Removed session 8.
Feb 13 22:30:33.651674 systemd[1]: Started sshd@6-10.244.31.90:22-147.75.109.163:51418.service - OpenSSH per-connection server daemon (147.75.109.163:51418).
Feb 13 22:30:34.540923 sshd[1875]: Accepted publickey for core from 147.75.109.163 port 51418 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:30:34.542854 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:30:34.549023 systemd-logind[1591]: New session 9 of user core.
Feb 13 22:30:34.556718 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 22:30:35.014510 sudo[1879]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 22:30:35.014961 sudo[1879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 22:30:35.839894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 22:30:35.856579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 22:30:35.894132 systemd[1]: Reloading requested from client PID 1917 ('systemctl') (unit session-9.scope)...
Feb 13 22:30:35.894160 systemd[1]: Reloading...
Feb 13 22:30:36.033636 zram_generator::config[1957]: No configuration found.
Feb 13 22:30:36.224059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 22:30:36.324236 systemd[1]: Reloading finished in 429 ms.
Feb 13 22:30:36.379091 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 22:30:36.379512 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 22:30:36.380078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 22:30:36.386568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 22:30:36.512387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 22:30:36.521736 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 22:30:36.638137 kubelet[2032]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 22:30:36.639247 kubelet[2032]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 22:30:36.639247 kubelet[2032]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 22:30:36.639247 kubelet[2032]: I0213 22:30:36.638808 2032 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 22:30:37.349239 kubelet[2032]: I0213 22:30:37.347675 2032 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 22:30:37.349239 kubelet[2032]: I0213 22:30:37.347762 2032 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 22:30:37.349239 kubelet[2032]: I0213 22:30:37.348226 2032 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 22:30:37.369151 kubelet[2032]: I0213 22:30:37.369099 2032 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 22:30:37.385815 kubelet[2032]: I0213 22:30:37.385783 2032 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 22:30:37.389141 kubelet[2032]: I0213 22:30:37.388571 2032 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 22:30:37.389141 kubelet[2032]: I0213 22:30:37.388634 2032 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.31.90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 22:30:37.389141 kubelet[2032]: I0213 22:30:37.388917 2032 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 22:30:37.389141 kubelet[2032]: I0213 22:30:37.388935 2032 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 22:30:37.389601 kubelet[2032]: I0213 22:30:37.389178 2032 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 22:30:37.390545 kubelet[2032]: I0213 22:30:37.390095 2032 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 22:30:37.390545 kubelet[2032]: I0213 22:30:37.390123 2032 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 22:30:37.390545 kubelet[2032]: I0213 22:30:37.390161 2032 kubelet.go:312] "Adding apiserver pod source"
Feb 13 22:30:37.390545 kubelet[2032]: I0213 22:30:37.390238 2032 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 22:30:37.393201 kubelet[2032]: E0213 22:30:37.392597 2032 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:37.393201 kubelet[2032]: E0213 22:30:37.392680 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:37.395008 kubelet[2032]: I0213 22:30:37.394983 2032 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 22:30:37.396792 kubelet[2032]: I0213 22:30:37.396770 2032 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 22:30:37.397012 kubelet[2032]: W0213 22:30:37.396988 2032 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 22:30:37.398338 kubelet[2032]: I0213 22:30:37.398318 2032 server.go:1264] "Started kubelet"
Feb 13 22:30:37.402209 kubelet[2032]: I0213 22:30:37.402012 2032 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 22:30:37.415345 kubelet[2032]: E0213 22:30:37.415145 2032 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.31.90.1823e52725f2aa85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.31.90,UID:10.244.31.90,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.244.31.90,},FirstTimestamp:2025-02-13 22:30:37.398289029 +0000 UTC m=+0.871685570,LastTimestamp:2025-02-13 22:30:37.398289029 +0000 UTC m=+0.871685570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.31.90,}"
Feb 13 22:30:37.417997 kubelet[2032]: I0213 22:30:37.417932 2032 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 22:30:37.418368 kubelet[2032]: I0213 22:30:37.418214 2032 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 22:30:37.419007 kubelet[2032]: I0213 22:30:37.418985 2032 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 22:30:37.419430 kubelet[2032]: I0213 22:30:37.419332 2032 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 22:30:37.423794 kubelet[2032]: I0213 22:30:37.423722 2032 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 22:30:37.424573 kubelet[2032]: I0213 22:30:37.424457 2032 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 22:30:37.424666 kubelet[2032]: I0213 22:30:37.424586 2032 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 22:30:37.426091 kubelet[2032]: E0213 22:30:37.425835 2032 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.244.31.90\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 22:30:37.428383 kubelet[2032]: I0213 22:30:37.428355 2032 factory.go:221] Registration of the systemd container factory successfully
Feb 13 22:30:37.429279 kubelet[2032]: I0213 22:30:37.429250 2032 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 22:30:37.430455 kubelet[2032]: E0213 22:30:37.430390 2032 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 22:30:37.433218 kubelet[2032]: I0213 22:30:37.432307 2032 factory.go:221] Registration of the containerd container factory successfully
Feb 13 22:30:37.441724 kubelet[2032]: W0213 22:30:37.441686 2032 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.244.31.90" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 22:30:37.441915 kubelet[2032]: E0213 22:30:37.441891 2032 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.244.31.90" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 22:30:37.442141 kubelet[2032]: W0213 22:30:37.442115 2032 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 22:30:37.442274 kubelet[2032]: E0213 22:30:37.442254 2032 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 22:30:37.449458 kubelet[2032]: W0213 22:30:37.449425 2032 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 22:30:37.451243 kubelet[2032]: E0213 22:30:37.450282 2032 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 22:30:37.486727 kubelet[2032]: I0213 22:30:37.486694 2032 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 22:30:37.487178 kubelet[2032]: I0213 22:30:37.487157 2032 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 22:30:37.487496 kubelet[2032]: I0213 22:30:37.487474 2032 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 22:30:37.492883 kubelet[2032]: I0213 22:30:37.492846 2032 policy_none.go:49] "None policy: Start"
Feb 13 22:30:37.496897 kubelet[2032]: I0213 22:30:37.496873 2032 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 22:30:37.497098 kubelet[2032]: I0213 22:30:37.497080 2032 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 22:30:37.505941 kubelet[2032]: I0213 22:30:37.505910 2032 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 22:30:37.507202 kubelet[2032]: I0213 22:30:37.506306 2032 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 22:30:37.507202 kubelet[2032]: I0213 22:30:37.506499 2032 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 22:30:37.517172 kubelet[2032]: E0213 22:30:37.517118 2032 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.31.90\" not found"
Feb 13 22:30:37.526206 kubelet[2032]: I0213 22:30:37.525938 2032 kubelet_node_status.go:73] "Attempting to register node" node="10.244.31.90"
Feb 13 22:30:37.533700 kubelet[2032]: I0213 22:30:37.533654 2032 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 22:30:37.536650 kubelet[2032]: I0213 22:30:37.535218 2032 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 22:30:37.536650 kubelet[2032]: I0213 22:30:37.535267 2032 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 22:30:37.536650 kubelet[2032]: I0213 22:30:37.535294 2032 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 22:30:37.536650 kubelet[2032]: E0213 22:30:37.535373 2032 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 22:30:37.537998 kubelet[2032]: I0213 22:30:37.537959 2032 kubelet_node_status.go:76] "Successfully registered node" node="10.244.31.90"
Feb 13 22:30:37.560030 kubelet[2032]: E0213 22:30:37.559996 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:37.597625 sudo[1879]: pam_unix(sudo:session): session closed for user root
Feb 13 22:30:37.661339 kubelet[2032]: E0213 22:30:37.661103 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:37.741255 sshd[1878]: Connection closed by 147.75.109.163 port 51418
Feb 13 22:30:37.742291 sshd-session[1875]: pam_unix(sshd:session): session closed for user core
Feb 13 22:30:37.746626 systemd[1]: sshd@6-10.244.31.90:22-147.75.109.163:51418.service: Deactivated successfully.
Feb 13 22:30:37.751728 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit.
Feb 13 22:30:37.752594 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 22:30:37.755644 systemd-logind[1591]: Removed session 9.
Feb 13 22:30:37.762108 kubelet[2032]: E0213 22:30:37.762039 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:37.862863 kubelet[2032]: E0213 22:30:37.862789 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:37.963994 kubelet[2032]: E0213 22:30:37.963817 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:38.064995 kubelet[2032]: E0213 22:30:38.064915 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:38.165910 kubelet[2032]: E0213 22:30:38.165836 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:38.266738 kubelet[2032]: E0213 22:30:38.266559 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:38.352595 kubelet[2032]: I0213 22:30:38.352520 2032 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 22:30:38.352859 kubelet[2032]: W0213 22:30:38.352816 2032 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 22:30:38.367784 kubelet[2032]: E0213 22:30:38.367714 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:38.393346 kubelet[2032]: E0213 22:30:38.393274 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:38.468409 kubelet[2032]: E0213 22:30:38.468338 2032 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.31.90\" not found"
Feb 13 22:30:38.570347 kubelet[2032]: I0213 22:30:38.570047 2032 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 22:30:38.571108 containerd[1615]: time="2025-02-13T22:30:38.571007626Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 22:30:38.572335 kubelet[2032]: I0213 22:30:38.571856 2032 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 22:30:39.393529 kubelet[2032]: E0213 22:30:39.393462 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:39.394837 kubelet[2032]: I0213 22:30:39.394533 2032 apiserver.go:52] "Watching apiserver"
Feb 13 22:30:39.401556 kubelet[2032]: I0213 22:30:39.401514 2032 topology_manager.go:215] "Topology Admit Handler" podUID="7f9049be-875d-430c-b746-09f3b9fd002c" podNamespace="calico-system" podName="calico-node-qgqjd"
Feb 13 22:30:39.401715 kubelet[2032]: I0213 22:30:39.401690 2032 topology_manager.go:215] "Topology Admit Handler" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8" podNamespace="calico-system" podName="csi-node-driver-xwf8m"
Feb 13 22:30:39.403099 kubelet[2032]: I0213 22:30:39.401832 2032 topology_manager.go:215] "Topology Admit Handler" podUID="8c4612ac-21c2-46c6-b240-086008f5ef2e" podNamespace="kube-system" podName="kube-proxy-zb6gr"
Feb 13 22:30:39.403099 kubelet[2032]: E0213 22:30:39.402409 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:39.425057 kubelet[2032]: I0213 22:30:39.425014 2032 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 22:30:39.435661 kubelet[2032]: I0213 22:30:39.435592 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c4612ac-21c2-46c6-b240-086008f5ef2e-xtables-lock\") pod \"kube-proxy-zb6gr\" (UID: \"8c4612ac-21c2-46c6-b240-086008f5ef2e\") " pod="kube-system/kube-proxy-zb6gr"
Feb 13 22:30:39.435661 kubelet[2032]: I0213 22:30:39.435653 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-lib-modules\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.435797 kubelet[2032]: I0213 22:30:39.435684 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f9049be-875d-430c-b746-09f3b9fd002c-tigera-ca-bundle\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.435797 kubelet[2032]: I0213 22:30:39.435710 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-flexvol-driver-host\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.435797 kubelet[2032]: I0213 22:30:39.435740 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30f07a2e-791e-4a29-bff8-bd4d882c17c8-varrun\") pod \"csi-node-driver-xwf8m\" (UID: \"30f07a2e-791e-4a29-bff8-bd4d882c17c8\") " pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:39.435797 kubelet[2032]: I0213 22:30:39.435768 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30f07a2e-791e-4a29-bff8-bd4d882c17c8-kubelet-dir\") pod \"csi-node-driver-xwf8m\" (UID: \"30f07a2e-791e-4a29-bff8-bd4d882c17c8\") " pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:39.435797 kubelet[2032]: I0213 22:30:39.435793 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30f07a2e-791e-4a29-bff8-bd4d882c17c8-registration-dir\") pod \"csi-node-driver-xwf8m\" (UID: \"30f07a2e-791e-4a29-bff8-bd4d882c17c8\") " pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:39.436035 kubelet[2032]: I0213 22:30:39.435816 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-cni-bin-dir\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436035 kubelet[2032]: I0213 22:30:39.435842 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-cni-log-dir\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436035 kubelet[2032]: I0213 22:30:39.435869 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgt87\" (UniqueName: \"kubernetes.io/projected/8c4612ac-21c2-46c6-b240-086008f5ef2e-kube-api-access-jgt87\") pod \"kube-proxy-zb6gr\" (UID: \"8c4612ac-21c2-46c6-b240-086008f5ef2e\") " pod="kube-system/kube-proxy-zb6gr"
Feb 13 22:30:39.436035 kubelet[2032]: I0213 22:30:39.435892 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-policysync\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436035 kubelet[2032]: I0213 22:30:39.435916 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7f9049be-875d-430c-b746-09f3b9fd002c-node-certs\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436255 kubelet[2032]: I0213 22:30:39.435940 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c4612ac-21c2-46c6-b240-086008f5ef2e-lib-modules\") pod \"kube-proxy-zb6gr\" (UID: \"8c4612ac-21c2-46c6-b240-086008f5ef2e\") " pod="kube-system/kube-proxy-zb6gr"
Feb 13 22:30:39.436255 kubelet[2032]: I0213 22:30:39.435964 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-var-run-calico\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436255 kubelet[2032]: I0213 22:30:39.435988 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-var-lib-calico\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436255 kubelet[2032]: I0213 22:30:39.436013 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nphqw\" (UniqueName: \"kubernetes.io/projected/7f9049be-875d-430c-b746-09f3b9fd002c-kube-api-access-nphqw\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436255 kubelet[2032]: I0213 22:30:39.436037 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30f07a2e-791e-4a29-bff8-bd4d882c17c8-socket-dir\") pod \"csi-node-driver-xwf8m\" (UID: \"30f07a2e-791e-4a29-bff8-bd4d882c17c8\") " pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:39.436464 kubelet[2032]: I0213 22:30:39.436063 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6q6t\" (UniqueName: \"kubernetes.io/projected/30f07a2e-791e-4a29-bff8-bd4d882c17c8-kube-api-access-t6q6t\") pod \"csi-node-driver-xwf8m\" (UID: \"30f07a2e-791e-4a29-bff8-bd4d882c17c8\") " pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:39.436464 kubelet[2032]: I0213 22:30:39.436087 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c4612ac-21c2-46c6-b240-086008f5ef2e-kube-proxy\") pod \"kube-proxy-zb6gr\" (UID: \"8c4612ac-21c2-46c6-b240-086008f5ef2e\") " pod="kube-system/kube-proxy-zb6gr"
Feb 13 22:30:39.436464 kubelet[2032]: I0213 22:30:39.436112 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-xtables-lock\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.436464 kubelet[2032]: I0213 22:30:39.436137 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7f9049be-875d-430c-b746-09f3b9fd002c-cni-net-dir\") pod \"calico-node-qgqjd\" (UID: \"7f9049be-875d-430c-b746-09f3b9fd002c\") " pod="calico-system/calico-node-qgqjd"
Feb 13 22:30:39.542899 kubelet[2032]: E0213 22:30:39.542461 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:39.542899 kubelet[2032]: W0213 22:30:39.542498 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:39.542899 kubelet[2032]: E0213 22:30:39.542550 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:39.542899 kubelet[2032]: E0213 22:30:39.542807 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:39.542899 kubelet[2032]: W0213 22:30:39.542821 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:39.542899 kubelet[2032]: E0213 22:30:39.542838 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:39.543672 kubelet[2032]: E0213 22:30:39.543562 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:39.543672 kubelet[2032]: W0213 22:30:39.543587 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:39.543672 kubelet[2032]: E0213 22:30:39.543608 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:39.544331 kubelet[2032]: E0213 22:30:39.543935 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:39.544331 kubelet[2032]: W0213 22:30:39.544163 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:39.544331 kubelet[2032]: E0213 22:30:39.544290 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:39.544965 kubelet[2032]: E0213 22:30:39.544865 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:39.544965 kubelet[2032]: W0213 22:30:39.544886 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:39.544965 kubelet[2032]: E0213 22:30:39.544926 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:39.553411 kubelet[2032]: E0213 22:30:39.553358 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:39.553605 kubelet[2032]: W0213 22:30:39.553503 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:39.553605 kubelet[2032]: E0213 22:30:39.553529 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:39.568300 kubelet[2032]: E0213 22:30:39.566633 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:39.568300 kubelet[2032]: W0213 22:30:39.568233 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:39.568300 kubelet[2032]: E0213 22:30:39.568262 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:39.569858 kubelet[2032]: E0213 22:30:39.569750 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:39.569858 kubelet[2032]: W0213 22:30:39.569769 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:39.569858 kubelet[2032]: E0213 22:30:39.569785 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:39.574404 kubelet[2032]: E0213 22:30:39.574284 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:39.574404 kubelet[2032]: W0213 22:30:39.574326 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:39.574404 kubelet[2032]: E0213 22:30:39.574345 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:39.710372 containerd[1615]: time="2025-02-13T22:30:39.707927956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgqjd,Uid:7f9049be-875d-430c-b746-09f3b9fd002c,Namespace:calico-system,Attempt:0,}" Feb 13 22:30:39.712117 containerd[1615]: time="2025-02-13T22:30:39.711495082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb6gr,Uid:8c4612ac-21c2-46c6-b240-086008f5ef2e,Namespace:kube-system,Attempt:0,}" Feb 13 22:30:40.394239 kubelet[2032]: E0213 22:30:40.394104 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:30:40.655512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573263379.mount: Deactivated successfully. 
Feb 13 22:30:40.665725 containerd[1615]: time="2025-02-13T22:30:40.665661080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:30:40.668003 containerd[1615]: time="2025-02-13T22:30:40.667323461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:30:40.669083 containerd[1615]: time="2025-02-13T22:30:40.669023904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 22:30:40.669796 containerd[1615]: time="2025-02-13T22:30:40.669743174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 22:30:40.670807 containerd[1615]: time="2025-02-13T22:30:40.670730068Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:30:40.675002 containerd[1615]: time="2025-02-13T22:30:40.674943522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:30:40.676802 containerd[1615]: time="2025-02-13T22:30:40.676312706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 968.155658ms" Feb 13 22:30:40.679136 containerd[1615]: 
time="2025-02-13T22:30:40.679078202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 967.498217ms" Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803106621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803235595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803262558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803410631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803112525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803225695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:30:40.803581 containerd[1615]: time="2025-02-13T22:30:40.803250859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:30:40.806091 containerd[1615]: time="2025-02-13T22:30:40.805587806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:30:40.947315 containerd[1615]: time="2025-02-13T22:30:40.947159906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb6gr,Uid:8c4612ac-21c2-46c6-b240-086008f5ef2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc99f06a3e9e7ae385ed0cca2c690d4958902d8b4149908512416499fe5a0677\"" Feb 13 22:30:40.953201 containerd[1615]: time="2025-02-13T22:30:40.951417091Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 22:30:40.956457 containerd[1615]: time="2025-02-13T22:30:40.956419021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgqjd,Uid:7f9049be-875d-430c-b746-09f3b9fd002c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\"" Feb 13 22:30:41.394709 kubelet[2032]: E0213 22:30:41.394446 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:30:41.536779 kubelet[2032]: E0213 22:30:41.536667 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8" Feb 13 22:30:42.395790 kubelet[2032]: E0213 22:30:42.395693 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:30:42.496003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90671611.mount: Deactivated successfully. 
Feb 13 22:30:43.168248 containerd[1615]: time="2025-02-13T22:30:43.167258990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:30:43.169008 containerd[1615]: time="2025-02-13T22:30:43.168969646Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866" Feb 13 22:30:43.170453 containerd[1615]: time="2025-02-13T22:30:43.170392714Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:30:43.173399 containerd[1615]: time="2025-02-13T22:30:43.173342868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:30:43.174376 containerd[1615]: time="2025-02-13T22:30:43.174341578Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.222880272s" Feb 13 22:30:43.174642 containerd[1615]: time="2025-02-13T22:30:43.174471591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 22:30:43.176985 containerd[1615]: time="2025-02-13T22:30:43.176923380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 22:30:43.178567 containerd[1615]: time="2025-02-13T22:30:43.178533024Z" level=info msg="CreateContainer within sandbox \"dc99f06a3e9e7ae385ed0cca2c690d4958902d8b4149908512416499fe5a0677\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 22:30:43.205585 containerd[1615]: time="2025-02-13T22:30:43.205475729Z" level=info msg="CreateContainer within sandbox \"dc99f06a3e9e7ae385ed0cca2c690d4958902d8b4149908512416499fe5a0677\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4690332b95d26f40cfb7343bc9c650dcc77733b640bcbc2f820200593412c6cf\"" Feb 13 22:30:43.206668 containerd[1615]: time="2025-02-13T22:30:43.206634899Z" level=info msg="StartContainer for \"4690332b95d26f40cfb7343bc9c650dcc77733b640bcbc2f820200593412c6cf\"" Feb 13 22:30:43.315165 containerd[1615]: time="2025-02-13T22:30:43.315114072Z" level=info msg="StartContainer for \"4690332b95d26f40cfb7343bc9c650dcc77733b640bcbc2f820200593412c6cf\" returns successfully" Feb 13 22:30:43.396847 kubelet[2032]: E0213 22:30:43.396788 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:30:43.539473 kubelet[2032]: E0213 22:30:43.537805 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8" Feb 13 22:30:43.657242 kubelet[2032]: E0213 22:30:43.655815 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.657242 kubelet[2032]: W0213 22:30:43.655849 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.657242 kubelet[2032]: E0213 22:30:43.655877 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.657589 kubelet[2032]: E0213 22:30:43.657567 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.658058 kubelet[2032]: W0213 22:30:43.657659 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.658058 kubelet[2032]: E0213 22:30:43.657684 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.658296 kubelet[2032]: E0213 22:30:43.658275 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.658402 kubelet[2032]: W0213 22:30:43.658382 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.658598 kubelet[2032]: E0213 22:30:43.658470 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.658855 kubelet[2032]: E0213 22:30:43.658836 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.659063 kubelet[2032]: W0213 22:30:43.658936 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.659063 kubelet[2032]: E0213 22:30:43.658961 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.659450 kubelet[2032]: E0213 22:30:43.659431 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.659727 kubelet[2032]: W0213 22:30:43.659543 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.659727 kubelet[2032]: E0213 22:30:43.659568 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.660132 kubelet[2032]: E0213 22:30:43.660001 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.660132 kubelet[2032]: W0213 22:30:43.660019 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.660132 kubelet[2032]: E0213 22:30:43.660035 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.660589 kubelet[2032]: E0213 22:30:43.660452 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.660589 kubelet[2032]: W0213 22:30:43.660469 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.660589 kubelet[2032]: E0213 22:30:43.660485 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.661063 kubelet[2032]: E0213 22:30:43.660924 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.661063 kubelet[2032]: W0213 22:30:43.660942 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.661063 kubelet[2032]: E0213 22:30:43.660959 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.661544 kubelet[2032]: E0213 22:30:43.661414 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.661544 kubelet[2032]: W0213 22:30:43.661432 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.661544 kubelet[2032]: E0213 22:30:43.661448 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.662036 kubelet[2032]: E0213 22:30:43.661932 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.662036 kubelet[2032]: W0213 22:30:43.661954 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.662036 kubelet[2032]: E0213 22:30:43.661970 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.662563 kubelet[2032]: E0213 22:30:43.662407 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.662563 kubelet[2032]: W0213 22:30:43.662425 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.662563 kubelet[2032]: E0213 22:30:43.662441 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.663108 kubelet[2032]: E0213 22:30:43.662873 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.663108 kubelet[2032]: W0213 22:30:43.662892 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.663108 kubelet[2032]: E0213 22:30:43.662907 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.663479 kubelet[2032]: E0213 22:30:43.663347 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.663479 kubelet[2032]: W0213 22:30:43.663366 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.663479 kubelet[2032]: E0213 22:30:43.663381 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.663921 kubelet[2032]: E0213 22:30:43.663775 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.663921 kubelet[2032]: W0213 22:30:43.663794 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.663921 kubelet[2032]: E0213 22:30:43.663815 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.664381 kubelet[2032]: E0213 22:30:43.664236 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.664381 kubelet[2032]: W0213 22:30:43.664255 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.664381 kubelet[2032]: E0213 22:30:43.664271 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.664914 kubelet[2032]: E0213 22:30:43.664687 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.664914 kubelet[2032]: W0213 22:30:43.664705 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.664914 kubelet[2032]: E0213 22:30:43.664797 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.665384 kubelet[2032]: E0213 22:30:43.665239 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.665384 kubelet[2032]: W0213 22:30:43.665260 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.665384 kubelet[2032]: E0213 22:30:43.665276 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.665813 kubelet[2032]: E0213 22:30:43.665676 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.665813 kubelet[2032]: W0213 22:30:43.665694 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.665813 kubelet[2032]: E0213 22:30:43.665709 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.666218 kubelet[2032]: E0213 22:30:43.666076 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.666218 kubelet[2032]: W0213 22:30:43.666094 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.666218 kubelet[2032]: E0213 22:30:43.666109 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.666688 kubelet[2032]: E0213 22:30:43.666526 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.666688 kubelet[2032]: W0213 22:30:43.666544 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.666688 kubelet[2032]: E0213 22:30:43.666559 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.667270 kubelet[2032]: E0213 22:30:43.667072 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.667270 kubelet[2032]: W0213 22:30:43.667090 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.667270 kubelet[2032]: E0213 22:30:43.667107 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 22:30:43.667742 kubelet[2032]: E0213 22:30:43.667577 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.667742 kubelet[2032]: W0213 22:30:43.667595 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.667742 kubelet[2032]: E0213 22:30:43.667619 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 22:30:43.668200 kubelet[2032]: E0213 22:30:43.668042 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 22:30:43.668200 kubelet[2032]: W0213 22:30:43.668060 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 22:30:43.668200 kubelet[2032]: E0213 22:30:43.668083 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 22:30:43.668651 kubelet[2032]: E0213 22:30:43.668513 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.668651 kubelet[2032]: W0213 22:30:43.668531 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.668651 kubelet[2032]: E0213 22:30:43.668556 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.669237 kubelet[2032]: E0213 22:30:43.669003 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.669237 kubelet[2032]: W0213 22:30:43.669020 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.669237 kubelet[2032]: E0213 22:30:43.669091 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.669725 kubelet[2032]: E0213 22:30:43.669573 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.669725 kubelet[2032]: W0213 22:30:43.669591 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.669725 kubelet[2032]: E0213 22:30:43.669615 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.669951 kubelet[2032]: E0213 22:30:43.669932 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.670038 kubelet[2032]: W0213 22:30:43.670019 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.670162 kubelet[2032]: E0213 22:30:43.670128 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.671201 kubelet[2032]: E0213 22:30:43.671152 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.671266 kubelet[2032]: W0213 22:30:43.671208 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.671266 kubelet[2032]: E0213 22:30:43.671240 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.671762 kubelet[2032]: E0213 22:30:43.671610 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.671762 kubelet[2032]: W0213 22:30:43.671631 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.671762 kubelet[2032]: E0213 22:30:43.671647 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.672437 kubelet[2032]: E0213 22:30:43.672068 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.672437 kubelet[2032]: W0213 22:30:43.672085 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.672437 kubelet[2032]: E0213 22:30:43.672102 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.672726 kubelet[2032]: E0213 22:30:43.672701 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.672825 kubelet[2032]: W0213 22:30:43.672805 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.673003 kubelet[2032]: E0213 22:30:43.672956 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:43.673486 kubelet[2032]: E0213 22:30:43.673450 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:43.673486 kubelet[2032]: W0213 22:30:43.673478 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:43.673597 kubelet[2032]: E0213 22:30:43.673496 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.398580 kubelet[2032]: E0213 22:30:44.398501 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:44.572484 kubelet[2032]: E0213 22:30:44.572435 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.572484 kubelet[2032]: W0213 22:30:44.572468 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.572484 kubelet[2032]: E0213 22:30:44.572493 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.572788 kubelet[2032]: E0213 22:30:44.572768 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.572845 kubelet[2032]: W0213 22:30:44.572790 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.572845 kubelet[2032]: E0213 22:30:44.572806 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.573095 kubelet[2032]: E0213 22:30:44.573065 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.573095 kubelet[2032]: W0213 22:30:44.573089 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.573242 kubelet[2032]: E0213 22:30:44.573105 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.573421 kubelet[2032]: E0213 22:30:44.573390 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.573421 kubelet[2032]: W0213 22:30:44.573414 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.573538 kubelet[2032]: E0213 22:30:44.573430 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.573712 kubelet[2032]: E0213 22:30:44.573692 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.573712 kubelet[2032]: W0213 22:30:44.573711 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.573828 kubelet[2032]: E0213 22:30:44.573727 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.573999 kubelet[2032]: E0213 22:30:44.573980 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.573999 kubelet[2032]: W0213 22:30:44.573999 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.574098 kubelet[2032]: E0213 22:30:44.574014 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.574273 kubelet[2032]: E0213 22:30:44.574254 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.574273 kubelet[2032]: W0213 22:30:44.574273 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.574405 kubelet[2032]: E0213 22:30:44.574288 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.574578 kubelet[2032]: E0213 22:30:44.574559 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.574578 kubelet[2032]: W0213 22:30:44.574578 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.574701 kubelet[2032]: E0213 22:30:44.574594 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.574889 kubelet[2032]: E0213 22:30:44.574871 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.574889 kubelet[2032]: W0213 22:30:44.574889 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.574994 kubelet[2032]: E0213 22:30:44.574904 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.575141 kubelet[2032]: E0213 22:30:44.575123 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.575141 kubelet[2032]: W0213 22:30:44.575141 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.575279 kubelet[2032]: E0213 22:30:44.575156 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.575439 kubelet[2032]: E0213 22:30:44.575420 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.575439 kubelet[2032]: W0213 22:30:44.575439 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.575558 kubelet[2032]: E0213 22:30:44.575454 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.575696 kubelet[2032]: E0213 22:30:44.575677 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.575696 kubelet[2032]: W0213 22:30:44.575696 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.575799 kubelet[2032]: E0213 22:30:44.575711 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.575976 kubelet[2032]: E0213 22:30:44.575958 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.575976 kubelet[2032]: W0213 22:30:44.575976 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.576089 kubelet[2032]: E0213 22:30:44.575990 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.576270 kubelet[2032]: E0213 22:30:44.576251 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.576270 kubelet[2032]: W0213 22:30:44.576270 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.576413 kubelet[2032]: E0213 22:30:44.576287 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.576548 kubelet[2032]: E0213 22:30:44.576529 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.576548 kubelet[2032]: W0213 22:30:44.576548 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.576651 kubelet[2032]: E0213 22:30:44.576563 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.576805 kubelet[2032]: E0213 22:30:44.576786 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.576805 kubelet[2032]: W0213 22:30:44.576804 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.576930 kubelet[2032]: E0213 22:30:44.576819 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.577115 kubelet[2032]: E0213 22:30:44.577095 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.577115 kubelet[2032]: W0213 22:30:44.577115 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.577241 kubelet[2032]: E0213 22:30:44.577129 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.577401 kubelet[2032]: E0213 22:30:44.577382 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.577401 kubelet[2032]: W0213 22:30:44.577401 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.577522 kubelet[2032]: E0213 22:30:44.577416 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.577675 kubelet[2032]: E0213 22:30:44.577656 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.577675 kubelet[2032]: W0213 22:30:44.577675 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.577794 kubelet[2032]: E0213 22:30:44.577689 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.577927 kubelet[2032]: E0213 22:30:44.577908 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.577927 kubelet[2032]: W0213 22:30:44.577927 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.578024 kubelet[2032]: E0213 22:30:44.577941 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.578305 kubelet[2032]: E0213 22:30:44.578275 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.578376 kubelet[2032]: W0213 22:30:44.578307 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.578376 kubelet[2032]: E0213 22:30:44.578323 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.578600 kubelet[2032]: E0213 22:30:44.578579 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.578600 kubelet[2032]: W0213 22:30:44.578598 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.578715 kubelet[2032]: E0213 22:30:44.578621 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.578907 kubelet[2032]: E0213 22:30:44.578888 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.578907 kubelet[2032]: W0213 22:30:44.578907 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.579012 kubelet[2032]: E0213 22:30:44.578939 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.579228 kubelet[2032]: E0213 22:30:44.579208 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.579228 kubelet[2032]: W0213 22:30:44.579227 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.579355 kubelet[2032]: E0213 22:30:44.579259 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.579531 kubelet[2032]: E0213 22:30:44.579512 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.579600 kubelet[2032]: W0213 22:30:44.579532 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.579600 kubelet[2032]: E0213 22:30:44.579566 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.579853 kubelet[2032]: E0213 22:30:44.579835 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.579853 kubelet[2032]: W0213 22:30:44.579853 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.580002 kubelet[2032]: E0213 22:30:44.579979 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.580408 kubelet[2032]: E0213 22:30:44.580385 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.580408 kubelet[2032]: W0213 22:30:44.580406 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.580514 kubelet[2032]: E0213 22:30:44.580430 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.580691 kubelet[2032]: E0213 22:30:44.580672 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.580691 kubelet[2032]: W0213 22:30:44.580691 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.580805 kubelet[2032]: E0213 22:30:44.580723 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.580985 kubelet[2032]: E0213 22:30:44.580966 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.580985 kubelet[2032]: W0213 22:30:44.580984 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.581091 kubelet[2032]: E0213 22:30:44.581019 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.581333 kubelet[2032]: E0213 22:30:44.581314 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.581333 kubelet[2032]: W0213 22:30:44.581333 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.581434 kubelet[2032]: E0213 22:30:44.581366 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.581877 kubelet[2032]: E0213 22:30:44.581847 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.581877 kubelet[2032]: W0213 22:30:44.581871 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.581985 kubelet[2032]: E0213 22:30:44.581895 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.582232 kubelet[2032]: E0213 22:30:44.582146 2032 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 22:30:44.582232 kubelet[2032]: W0213 22:30:44.582171 2032 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 22:30:44.582396 kubelet[2032]: E0213 22:30:44.582240 2032 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 22:30:44.869600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851252086.mount: Deactivated successfully.
Feb 13 22:30:45.029441 containerd[1615]: time="2025-02-13T22:30:45.028365134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:30:45.031381 containerd[1615]: time="2025-02-13T22:30:45.031303000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 22:30:45.032248 containerd[1615]: time="2025-02-13T22:30:45.032159649Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:30:45.035876 containerd[1615]: time="2025-02-13T22:30:45.035834191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:30:45.037546 containerd[1615]: time="2025-02-13T22:30:45.037360568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.860367634s"
Feb 13 22:30:45.037546 containerd[1615]: time="2025-02-13T22:30:45.037425126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 22:30:45.041427 containerd[1615]: time="2025-02-13T22:30:45.041054498Z" level=info msg="CreateContainer within sandbox \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 22:30:45.057174 containerd[1615]: time="2025-02-13T22:30:45.057129308Z" level=info msg="CreateContainer within sandbox \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37\""
Feb 13 22:30:45.058606 containerd[1615]: time="2025-02-13T22:30:45.058276882Z" level=info msg="StartContainer for \"9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37\""
Feb 13 22:30:45.146559 containerd[1615]: time="2025-02-13T22:30:45.146032433Z" level=info msg="StartContainer for \"9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37\" returns successfully"
Feb 13 22:30:45.281141 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 22:30:45.399410 kubelet[2032]: E0213 22:30:45.399244 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:45.448219 containerd[1615]: time="2025-02-13T22:30:45.447984699Z" level=info msg="shim disconnected" id=9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37 namespace=k8s.io
Feb 13 22:30:45.448219 containerd[1615]: time="2025-02-13T22:30:45.448057195Z" level=warning msg="cleaning up after shim disconnected" id=9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37 namespace=k8s.io
Feb 13 22:30:45.448219 containerd[1615]: time="2025-02-13T22:30:45.448078958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 22:30:45.536940 kubelet[2032]: E0213 22:30:45.536478 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:45.570851 containerd[1615]: time="2025-02-13T22:30:45.570448998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 22:30:45.590728 kubelet[2032]: I0213 22:30:45.590661 2032 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zb6gr" podStartSLOduration=6.365621669 podStartE2EDuration="8.590629685s" podCreationTimestamp="2025-02-13 22:30:37 +0000 UTC" firstStartedPulling="2025-02-13 22:30:40.950852886 +0000 UTC m=+4.424249419" lastFinishedPulling="2025-02-13 22:30:43.175860902 +0000 UTC m=+6.649257435" observedRunningTime="2025-02-13 22:30:43.583249769 +0000 UTC m=+7.056646326" watchObservedRunningTime="2025-02-13 22:30:45.590629685 +0000 UTC m=+9.064026221"
Feb 13 22:30:45.810319 systemd[1]: run-containerd-runc-k8s.io-9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37-runc.XzeeeX.mount: Deactivated successfully.
Feb 13 22:30:45.810604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9002574dbe12a61521a08628c1d5359032edc631f364142f9aac4c7311b0fd37-rootfs.mount: Deactivated successfully.
Feb 13 22:30:46.400532 kubelet[2032]: E0213 22:30:46.400419 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:47.401308 kubelet[2032]: E0213 22:30:47.401142 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:47.544227 kubelet[2032]: E0213 22:30:47.541472 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:48.402276 kubelet[2032]: E0213 22:30:48.402146 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:49.402316 kubelet[2032]: E0213 22:30:49.402278 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:49.537031 kubelet[2032]: E0213 22:30:49.535864 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:50.403365 kubelet[2032]: E0213 22:30:50.403277 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:51.403733 kubelet[2032]: E0213 22:30:51.403632 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:51.536626 kubelet[2032]: E0213 22:30:51.536560 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:52.404554 kubelet[2032]: E0213 22:30:52.404511 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:52.493357 containerd[1615]: time="2025-02-13T22:30:52.492083371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:30:52.493357 containerd[1615]: time="2025-02-13T22:30:52.493286659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 22:30:52.494067 containerd[1615]: time="2025-02-13T22:30:52.494033466Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:30:52.496676 containerd[1615]: time="2025-02-13T22:30:52.496638068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:30:52.498380 containerd[1615]: time="2025-02-13T22:30:52.498349323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.927785957s"
Feb 13 22:30:52.498522 containerd[1615]: time="2025-02-13T22:30:52.498494434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 22:30:52.501665 containerd[1615]: time="2025-02-13T22:30:52.501632106Z" level=info msg="CreateContainer within sandbox \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 22:30:52.524576 containerd[1615]: time="2025-02-13T22:30:52.524532211Z" level=info msg="CreateContainer within sandbox \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b81c7c0fac839bd5b02cc1a9683d9b52317a5d46063817cc9567f15f4b10284b\""
Feb 13 22:30:52.525626 containerd[1615]: time="2025-02-13T22:30:52.525579630Z" level=info msg="StartContainer for \"b81c7c0fac839bd5b02cc1a9683d9b52317a5d46063817cc9567f15f4b10284b\""
Feb 13 22:30:52.613223 containerd[1615]: time="2025-02-13T22:30:52.611792055Z" level=info msg="StartContainer for \"b81c7c0fac839bd5b02cc1a9683d9b52317a5d46063817cc9567f15f4b10284b\" returns successfully"
Feb 13 22:30:53.405431 kubelet[2032]: E0213 22:30:53.405284 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:53.535998 kubelet[2032]: E0213 22:30:53.535598 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:53.648695 containerd[1615]: time="2025-02-13T22:30:53.648613143Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 22:30:53.675349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b81c7c0fac839bd5b02cc1a9683d9b52317a5d46063817cc9567f15f4b10284b-rootfs.mount: Deactivated successfully.
Feb 13 22:30:53.735803 kubelet[2032]: I0213 22:30:53.735744 2032 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 22:30:53.854459 containerd[1615]: time="2025-02-13T22:30:53.854332407Z" level=info msg="shim disconnected" id=b81c7c0fac839bd5b02cc1a9683d9b52317a5d46063817cc9567f15f4b10284b namespace=k8s.io
Feb 13 22:30:53.854650 containerd[1615]: time="2025-02-13T22:30:53.854483660Z" level=warning msg="cleaning up after shim disconnected" id=b81c7c0fac839bd5b02cc1a9683d9b52317a5d46063817cc9567f15f4b10284b namespace=k8s.io
Feb 13 22:30:53.854650 containerd[1615]: time="2025-02-13T22:30:53.854507029Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 22:30:54.389613 systemd[1]: Started sshd@7-10.244.31.90:22-116.110.7.76:59752.service - OpenSSH per-connection server daemon (116.110.7.76:59752).
Feb 13 22:30:54.406343 kubelet[2032]: E0213 22:30:54.406271 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:54.616886 containerd[1615]: time="2025-02-13T22:30:54.616838188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 22:30:55.406764 kubelet[2032]: E0213 22:30:55.406690 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:55.540742 containerd[1615]: time="2025-02-13T22:30:55.540233875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:0,}"
Feb 13 22:30:55.624377 containerd[1615]: time="2025-02-13T22:30:55.624176800Z" level=error msg="Failed to destroy network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:55.627348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2-shm.mount: Deactivated successfully.
Feb 13 22:30:55.628990 containerd[1615]: time="2025-02-13T22:30:55.627502699Z" level=error msg="encountered an error cleaning up failed sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:55.628990 containerd[1615]: time="2025-02-13T22:30:55.627578056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:55.629111 kubelet[2032]: E0213 22:30:55.628307 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:55.629111 kubelet[2032]: E0213 22:30:55.628403 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:55.629111 kubelet[2032]: E0213 22:30:55.628457 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:55.629366 kubelet[2032]: E0213 22:30:55.628516 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:56.065877 sshd[2549]: Invalid user guest from 116.110.7.76 port 59752
Feb 13 22:30:56.136739 kubelet[2032]: I0213 22:30:56.136649 2032 topology_manager.go:215] "Topology Admit Handler" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc" podNamespace="default" podName="nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:56.258732 kubelet[2032]: I0213 22:30:56.258642 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vg9x\" (UniqueName: \"kubernetes.io/projected/e78d438d-8e1b-4f6e-afaf-094ee4898fdc-kube-api-access-6vg9x\") pod \"nginx-deployment-85f456d6dd-5hsnp\" (UID: \"e78d438d-8e1b-4f6e-afaf-094ee4898fdc\") " pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:56.408175 kubelet[2032]: E0213 22:30:56.407927 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:56.443463 containerd[1615]: time="2025-02-13T22:30:56.443387356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:0,}"
Feb 13 22:30:56.503384 sshd-session[2591]: pam_faillock(sshd:auth): User unknown
Feb 13 22:30:56.507400 sshd[2549]: Postponed keyboard-interactive for invalid user guest from 116.110.7.76 port 59752 ssh2 [preauth]
Feb 13 22:30:56.573598 containerd[1615]: time="2025-02-13T22:30:56.573385280Z" level=error msg="Failed to destroy network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.574176 containerd[1615]: time="2025-02-13T22:30:56.574022139Z" level=error msg="encountered an error cleaning up failed sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.574176 containerd[1615]: time="2025-02-13T22:30:56.574118911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.577064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56-shm.mount: Deactivated successfully.
Feb 13 22:30:56.578485 kubelet[2032]: E0213 22:30:56.577391 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.578485 kubelet[2032]: E0213 22:30:56.577468 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:56.578485 kubelet[2032]: E0213 22:30:56.577496 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:56.578650 kubelet[2032]: E0213 22:30:56.577551 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc"
Feb 13 22:30:56.628027 kubelet[2032]: I0213 22:30:56.626964 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2"
Feb 13 22:30:56.629050 containerd[1615]: time="2025-02-13T22:30:56.628656762Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\""
Feb 13 22:30:56.629412 containerd[1615]: time="2025-02-13T22:30:56.629381766Z" level=info msg="Ensure that sandbox f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2 in task-service has been cleanup successfully"
Feb 13 22:30:56.633694 containerd[1615]: time="2025-02-13T22:30:56.633264738Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully"
Feb 13 22:30:56.633694 containerd[1615]: time="2025-02-13T22:30:56.633307562Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully"
Feb 13 22:30:56.634043 systemd[1]: run-netns-cni\x2d266b8104\x2d39b5\x2d3bd9\x2d61b9\x2de91b7eff95cc.mount: Deactivated successfully.
Feb 13 22:30:56.637474 containerd[1615]: time="2025-02-13T22:30:56.637069774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:1,}"
Feb 13 22:30:56.638635 kubelet[2032]: I0213 22:30:56.638551 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56"
Feb 13 22:30:56.640555 containerd[1615]: time="2025-02-13T22:30:56.640522656Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\""
Feb 13 22:30:56.641790 containerd[1615]: time="2025-02-13T22:30:56.641735680Z" level=info msg="Ensure that sandbox b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56 in task-service has been cleanup successfully"
Feb 13 22:30:56.644417 containerd[1615]: time="2025-02-13T22:30:56.644332135Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully"
Feb 13 22:30:56.645917 containerd[1615]: time="2025-02-13T22:30:56.644417094Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully"
Feb 13 22:30:56.645917 containerd[1615]: time="2025-02-13T22:30:56.645791604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:1,}"
Feb 13 22:30:56.645565 systemd[1]: run-netns-cni\x2def3ed0fa\x2db693\x2d6742\x2dc311\x2d0765aa92cd14.mount: Deactivated successfully.
Feb 13 22:30:56.801635 containerd[1615]: time="2025-02-13T22:30:56.801561235Z" level=error msg="Failed to destroy network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.802416 containerd[1615]: time="2025-02-13T22:30:56.802379017Z" level=error msg="encountered an error cleaning up failed sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.804322 containerd[1615]: time="2025-02-13T22:30:56.804283375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.804939 kubelet[2032]: E0213 22:30:56.804881 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.805519 kubelet[2032]: E0213 22:30:56.805308 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:56.805519 kubelet[2032]: E0213 22:30:56.805361 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:56.805519 kubelet[2032]: E0213 22:30:56.805453 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc"
Feb 13 22:30:56.812963 sshd-session[2591]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 22:30:56.813279 sshd-session[2591]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.110.7.76
Feb 13 22:30:56.814772 sshd-session[2591]: pam_faillock(sshd:auth): User unknown
Feb 13 22:30:56.823322 containerd[1615]: time="2025-02-13T22:30:56.823251607Z" level=error msg="Failed to destroy network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.823866 containerd[1615]: time="2025-02-13T22:30:56.823826831Z" level=error msg="encountered an error cleaning up failed sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.823948 containerd[1615]: time="2025-02-13T22:30:56.823916608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.824730 kubelet[2032]: E0213 22:30:56.824466 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:56.824730 kubelet[2032]: E0213 22:30:56.824633 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:56.824730 kubelet[2032]: E0213 22:30:56.824669 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:56.824957 kubelet[2032]: E0213 22:30:56.824750 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:57.390879 kubelet[2032]: E0213 22:30:57.390720 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:57.409236 kubelet[2032]: E0213 22:30:57.409079 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:57.553605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb-shm.mount: Deactivated successfully.
Feb 13 22:30:57.646283 kubelet[2032]: I0213 22:30:57.643370 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195"
Feb 13 22:30:57.646424 containerd[1615]: time="2025-02-13T22:30:57.644245274Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\""
Feb 13 22:30:57.646424 containerd[1615]: time="2025-02-13T22:30:57.644481593Z" level=info msg="Ensure that sandbox a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195 in task-service has been cleanup successfully"
Feb 13 22:30:57.647401 containerd[1615]: time="2025-02-13T22:30:57.646949125Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully"
Feb 13 22:30:57.647401 containerd[1615]: time="2025-02-13T22:30:57.646977144Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully"
Feb 13 22:30:57.648746 containerd[1615]: time="2025-02-13T22:30:57.647693290Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\""
Feb 13 22:30:57.648746 containerd[1615]: time="2025-02-13T22:30:57.647797803Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully"
Feb 13 22:30:57.648746 containerd[1615]: time="2025-02-13T22:30:57.647815836Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully"
Feb 13 22:30:57.648272 systemd[1]: run-netns-cni\x2d7ae53483\x2dac6b\x2dc3ed\x2d91f8\x2dd88b09772d9f.mount: Deactivated successfully.
Feb 13 22:30:57.650987 containerd[1615]: time="2025-02-13T22:30:57.650545085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:2,}"
Feb 13 22:30:57.655061 kubelet[2032]: I0213 22:30:57.654695 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb"
Feb 13 22:30:57.656234 containerd[1615]: time="2025-02-13T22:30:57.656038758Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\""
Feb 13 22:30:57.656388 containerd[1615]: time="2025-02-13T22:30:57.656290304Z" level=info msg="Ensure that sandbox 70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb in task-service has been cleanup successfully"
Feb 13 22:30:57.656530 containerd[1615]: time="2025-02-13T22:30:57.656497438Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully"
Feb 13 22:30:57.656530 containerd[1615]: time="2025-02-13T22:30:57.656519587Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully"
Feb 13 22:30:57.656860 containerd[1615]: time="2025-02-13T22:30:57.656816156Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\""
Feb 13 22:30:57.656975 containerd[1615]: time="2025-02-13T22:30:57.656922262Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully"
Feb 13 22:30:57.656975 containerd[1615]: time="2025-02-13T22:30:57.656940752Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully"
Feb 13 22:30:57.659857 systemd[1]: run-netns-cni\x2d64ef3ac5\x2d5713\x2d2457\x2dc46d\x2d92db276002e4.mount: Deactivated successfully.
Feb 13 22:30:57.662254 containerd[1615]: time="2025-02-13T22:30:57.662153102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:2,}"
Feb 13 22:30:57.790550 containerd[1615]: time="2025-02-13T22:30:57.790416418Z" level=error msg="Failed to destroy network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.791347 containerd[1615]: time="2025-02-13T22:30:57.790937422Z" level=error msg="encountered an error cleaning up failed sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.791347 containerd[1615]: time="2025-02-13T22:30:57.791015664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.792657 kubelet[2032]: E0213 22:30:57.791740 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.792657 kubelet[2032]: E0213 22:30:57.791851 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:57.792657 kubelet[2032]: E0213 22:30:57.791887 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:30:57.792862 kubelet[2032]: E0213 22:30:57.791956 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc"
Feb 13 22:30:57.818621 containerd[1615]: time="2025-02-13T22:30:57.818544610Z" level=error msg="Failed to destroy network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.819096 containerd[1615]: time="2025-02-13T22:30:57.819031834Z" level=error msg="encountered an error cleaning up failed sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.820214 containerd[1615]: time="2025-02-13T22:30:57.819164138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.820295 kubelet[2032]: E0213 22:30:57.819546 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:57.820295 kubelet[2032]: E0213 22:30:57.819645 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:57.820295 kubelet[2032]: E0213 22:30:57.819679 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:57.820457 kubelet[2032]: E0213 22:30:57.819753 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:30:58.410356 kubelet[2032]: E0213 22:30:58.410264 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:30:58.554222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff-shm.mount: Deactivated successfully.
Feb 13 22:30:58.554458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61-shm.mount: Deactivated successfully. Feb 13 22:30:58.660231 kubelet[2032]: I0213 22:30:58.659529 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff" Feb 13 22:30:58.661296 containerd[1615]: time="2025-02-13T22:30:58.660741765Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:30:58.661296 containerd[1615]: time="2025-02-13T22:30:58.661020950Z" level=info msg="Ensure that sandbox 9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff in task-service has been cleanup successfully" Feb 13 22:30:58.665066 containerd[1615]: time="2025-02-13T22:30:58.664353216Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:30:58.665066 containerd[1615]: time="2025-02-13T22:30:58.664382379Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:30:58.666104 systemd[1]: run-netns-cni\x2d70890114\x2d185a\x2d7fb9\x2d4e3c\x2df91b913d65ff.mount: Deactivated successfully. 
Feb 13 22:30:58.666843 containerd[1615]: time="2025-02-13T22:30:58.666651840Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:30:58.666843 containerd[1615]: time="2025-02-13T22:30:58.666782052Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:30:58.666843 containerd[1615]: time="2025-02-13T22:30:58.666803047Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully" Feb 13 22:30:58.670335 containerd[1615]: time="2025-02-13T22:30:58.669762061Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:30:58.670335 containerd[1615]: time="2025-02-13T22:30:58.669868074Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:30:58.670335 containerd[1615]: time="2025-02-13T22:30:58.669887537Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:30:58.671722 kubelet[2032]: I0213 22:30:58.671497 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61" Feb 13 22:30:58.675852 containerd[1615]: time="2025-02-13T22:30:58.674767487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:3,}" Feb 13 22:30:58.677880 containerd[1615]: time="2025-02-13T22:30:58.677845706Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\"" Feb 13 22:30:58.678355 containerd[1615]: time="2025-02-13T22:30:58.678320226Z" level=info msg="Ensure that sandbox 
1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61 in task-service has been cleanup successfully" Feb 13 22:30:58.681204 containerd[1615]: time="2025-02-13T22:30:58.679921850Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully" Feb 13 22:30:58.681204 containerd[1615]: time="2025-02-13T22:30:58.679951491Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully" Feb 13 22:30:58.683475 systemd[1]: run-netns-cni\x2d43548a56\x2d48d5\x2d6eb9\x2d02ac\x2d58a2038be645.mount: Deactivated successfully. Feb 13 22:30:58.684889 containerd[1615]: time="2025-02-13T22:30:58.684399623Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 22:30:58.684889 containerd[1615]: time="2025-02-13T22:30:58.684520081Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully" Feb 13 22:30:58.684889 containerd[1615]: time="2025-02-13T22:30:58.684539919Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully" Feb 13 22:30:58.685902 containerd[1615]: time="2025-02-13T22:30:58.685793657Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:30:58.685975 containerd[1615]: time="2025-02-13T22:30:58.685911617Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully" Feb 13 22:30:58.685975 containerd[1615]: time="2025-02-13T22:30:58.685930557Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully" Feb 13 22:30:58.686781 containerd[1615]: time="2025-02-13T22:30:58.686552976Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:3,}" Feb 13 22:30:58.833574 containerd[1615]: time="2025-02-13T22:30:58.831760859Z" level=error msg="Failed to destroy network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.833574 containerd[1615]: time="2025-02-13T22:30:58.832771252Z" level=error msg="encountered an error cleaning up failed sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.833574 containerd[1615]: time="2025-02-13T22:30:58.832844880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.835666 kubelet[2032]: E0213 22:30:58.835603 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.835925 kubelet[2032]: 
E0213 22:30:58.835873 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m" Feb 13 22:30:58.836062 kubelet[2032]: E0213 22:30:58.836032 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m" Feb 13 22:30:58.836335 kubelet[2032]: E0213 22:30:58.836270 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8" Feb 13 22:30:58.858566 containerd[1615]: time="2025-02-13T22:30:58.858460693Z" level=error msg="Failed to destroy network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.859549 containerd[1615]: time="2025-02-13T22:30:58.859161460Z" level=error msg="encountered an error cleaning up failed sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.859549 containerd[1615]: time="2025-02-13T22:30:58.859263078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.859902 kubelet[2032]: E0213 22:30:58.859764 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:58.859902 kubelet[2032]: E0213 22:30:58.859856 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:30:58.860216 kubelet[2032]: E0213 22:30:58.860058 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:30:58.860700 kubelet[2032]: E0213 22:30:58.860653 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc" Feb 13 22:30:59.124766 sshd[2549]: PAM: Permission denied for illegal user guest from 116.110.7.76 Feb 13 22:30:59.125901 sshd[2549]: Failed keyboard-interactive/pam for invalid user guest from 116.110.7.76 port 59752 ssh2 Feb 13 22:30:59.411557 kubelet[2032]: E0213 22:30:59.411396 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:30:59.441871 sshd[2549]: Connection closed by invalid user guest 116.110.7.76 port 59752 [preauth] Feb 13 22:30:59.448780 systemd[1]: sshd@7-10.244.31.90:22-116.110.7.76:59752.service: Deactivated 
successfully. Feb 13 22:30:59.554707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631-shm.mount: Deactivated successfully. Feb 13 22:30:59.555411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664-shm.mount: Deactivated successfully. Feb 13 22:30:59.679698 kubelet[2032]: I0213 22:30:59.678410 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631" Feb 13 22:30:59.680030 containerd[1615]: time="2025-02-13T22:30:59.679980912Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\"" Feb 13 22:30:59.682532 containerd[1615]: time="2025-02-13T22:30:59.680308158Z" level=info msg="Ensure that sandbox 7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631 in task-service has been cleanup successfully" Feb 13 22:30:59.682532 containerd[1615]: time="2025-02-13T22:30:59.680590046Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully" Feb 13 22:30:59.682532 containerd[1615]: time="2025-02-13T22:30:59.680615055Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully" Feb 13 22:30:59.684204 containerd[1615]: time="2025-02-13T22:30:59.684058269Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\"" Feb 13 22:30:59.684977 containerd[1615]: time="2025-02-13T22:30:59.684304804Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully" Feb 13 22:30:59.684977 containerd[1615]: time="2025-02-13T22:30:59.684332797Z" level=info msg="StopPodSandbox for 
\"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully" Feb 13 22:30:59.684849 systemd[1]: run-netns-cni\x2d158cc664\x2d7ec2\x2d87fb\x2d13f4\x2d404c4c56c1f9.mount: Deactivated successfully. Feb 13 22:30:59.685566 containerd[1615]: time="2025-02-13T22:30:59.685153132Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 22:30:59.685566 containerd[1615]: time="2025-02-13T22:30:59.685529815Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully" Feb 13 22:30:59.685566 containerd[1615]: time="2025-02-13T22:30:59.685558463Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully" Feb 13 22:30:59.688031 containerd[1615]: time="2025-02-13T22:30:59.687405173Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:30:59.688031 containerd[1615]: time="2025-02-13T22:30:59.687510644Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully" Feb 13 22:30:59.688031 containerd[1615]: time="2025-02-13T22:30:59.687530541Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully" Feb 13 22:30:59.688802 containerd[1615]: time="2025-02-13T22:30:59.688592145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:4,}" Feb 13 22:30:59.706379 kubelet[2032]: I0213 22:30:59.706229 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664" Feb 13 22:30:59.706992 containerd[1615]: time="2025-02-13T22:30:59.706943573Z" level=info msg="StopPodSandbox for 
\"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:30:59.707756 containerd[1615]: time="2025-02-13T22:30:59.707709208Z" level=info msg="Ensure that sandbox 912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664 in task-service has been cleanup successfully" Feb 13 22:30:59.711221 containerd[1615]: time="2025-02-13T22:30:59.710685157Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully" Feb 13 22:30:59.711221 containerd[1615]: time="2025-02-13T22:30:59.710718710Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully" Feb 13 22:30:59.711843 systemd[1]: run-netns-cni\x2dba6a1678\x2debb9\x2dc179\x2df37a\x2d34d01b8bd855.mount: Deactivated successfully. Feb 13 22:30:59.712926 containerd[1615]: time="2025-02-13T22:30:59.712893179Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:30:59.713240 containerd[1615]: time="2025-02-13T22:30:59.713015143Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:30:59.713240 containerd[1615]: time="2025-02-13T22:30:59.713041147Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:30:59.714214 containerd[1615]: time="2025-02-13T22:30:59.714117354Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:30:59.714353 containerd[1615]: time="2025-02-13T22:30:59.714242153Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:30:59.714353 containerd[1615]: time="2025-02-13T22:30:59.714261788Z" level=info msg="StopPodSandbox for 
\"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully" Feb 13 22:30:59.715443 containerd[1615]: time="2025-02-13T22:30:59.715309875Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:30:59.715443 containerd[1615]: time="2025-02-13T22:30:59.715413338Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:30:59.715443 containerd[1615]: time="2025-02-13T22:30:59.715431237Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:30:59.716206 containerd[1615]: time="2025-02-13T22:30:59.715945401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:4,}" Feb 13 22:30:59.951459 containerd[1615]: time="2025-02-13T22:30:59.950887296Z" level=error msg="Failed to destroy network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:59.952260 containerd[1615]: time="2025-02-13T22:30:59.952220783Z" level=error msg="encountered an error cleaning up failed sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:59.952337 containerd[1615]: time="2025-02-13T22:30:59.952297679Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:59.952675 kubelet[2032]: E0213 22:30:59.952624 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:59.952775 kubelet[2032]: E0213 22:30:59.952707 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:30:59.952775 kubelet[2032]: E0213 22:30:59.952743 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:30:59.952877 kubelet[2032]: E0213 22:30:59.952821 2032 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc" Feb 13 22:30:59.954940 containerd[1615]: time="2025-02-13T22:30:59.954727672Z" level=error msg="Failed to destroy network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:59.955170 containerd[1615]: time="2025-02-13T22:30:59.955081403Z" level=error msg="encountered an error cleaning up failed sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:30:59.955170 containerd[1615]: time="2025-02-13T22:30:59.955154892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:59.955642 kubelet[2032]: E0213 22:30:59.955361 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:30:59.955642 kubelet[2032]: E0213 22:30:59.955406 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:59.955642 kubelet[2032]: E0213 22:30:59.955430 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:30:59.955830 kubelet[2032]: E0213 22:30:59.955482 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:31:00.193433 update_engine[1596]: I20250213 22:31:00.193291 1596 update_attempter.cc:509] Updating boot flags...
Feb 13 22:31:00.265708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2870)
Feb 13 22:31:00.416251 kubelet[2032]: E0213 22:31:00.414514 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:00.434233 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2873)
Feb 13 22:31:00.553917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd-shm.mount: Deactivated successfully.
Feb 13 22:31:00.554673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d-shm.mount: Deactivated successfully.
Feb 13 22:31:00.714007 kubelet[2032]: I0213 22:31:00.713847 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d"
Feb 13 22:31:00.719219 containerd[1615]: time="2025-02-13T22:31:00.716675236Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\""
Feb 13 22:31:00.719219 containerd[1615]: time="2025-02-13T22:31:00.716934320Z" level=info msg="Ensure that sandbox 657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d in task-service has been cleanup successfully"
Feb 13 22:31:00.720745 containerd[1615]: time="2025-02-13T22:31:00.720712138Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully"
Feb 13 22:31:00.720745 containerd[1615]: time="2025-02-13T22:31:00.720743940Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully"
Feb 13 22:31:00.721682 containerd[1615]: time="2025-02-13T22:31:00.721594498Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\""
Feb 13 22:31:00.721900 containerd[1615]: time="2025-02-13T22:31:00.721703048Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully"
Feb 13 22:31:00.721900 containerd[1615]: time="2025-02-13T22:31:00.721721548Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully"
Feb 13 22:31:00.722898 systemd[1]: run-netns-cni\x2d4eb311c8\x2d7702\x2dd614\x2d8d5f\x2d4a2959c16323.mount: Deactivated successfully.
Feb 13 22:31:00.724756 containerd[1615]: time="2025-02-13T22:31:00.724622187Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\""
Feb 13 22:31:00.724964 containerd[1615]: time="2025-02-13T22:31:00.724833089Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully"
Feb 13 22:31:00.724964 containerd[1615]: time="2025-02-13T22:31:00.724854217Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully"
Feb 13 22:31:00.725641 containerd[1615]: time="2025-02-13T22:31:00.725405863Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\""
Feb 13 22:31:00.725641 containerd[1615]: time="2025-02-13T22:31:00.725543194Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully"
Feb 13 22:31:00.725641 containerd[1615]: time="2025-02-13T22:31:00.725562626Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully"
Feb 13 22:31:00.726304 kubelet[2032]: I0213 22:31:00.725952 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd"
Feb 13 22:31:00.726417 containerd[1615]: time="2025-02-13T22:31:00.725990357Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\""
Feb 13 22:31:00.726417 containerd[1615]: time="2025-02-13T22:31:00.726103825Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully"
Feb 13 22:31:00.726417 containerd[1615]: time="2025-02-13T22:31:00.726160925Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully"
Feb 13 22:31:00.727367 containerd[1615]: time="2025-02-13T22:31:00.726902301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:5,}"
Feb 13 22:31:00.728374 containerd[1615]: time="2025-02-13T22:31:00.728260202Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\""
Feb 13 22:31:00.730101 containerd[1615]: time="2025-02-13T22:31:00.730012689Z" level=info msg="Ensure that sandbox 692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd in task-service has been cleanup successfully"
Feb 13 22:31:00.732147 containerd[1615]: time="2025-02-13T22:31:00.731991696Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully"
Feb 13 22:31:00.732147 containerd[1615]: time="2025-02-13T22:31:00.732019659Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns successfully"
Feb 13 22:31:00.733805 containerd[1615]: time="2025-02-13T22:31:00.733601996Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\""
Feb 13 22:31:00.733805 containerd[1615]: time="2025-02-13T22:31:00.733707253Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully"
Feb 13 22:31:00.733805 containerd[1615]: time="2025-02-13T22:31:00.733726637Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully"
Feb 13 22:31:00.735033 containerd[1615]: time="2025-02-13T22:31:00.734608299Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\""
Feb 13 22:31:00.735437 containerd[1615]: time="2025-02-13T22:31:00.735276676Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully"
Feb 13 22:31:00.735661 containerd[1615]: time="2025-02-13T22:31:00.735633072Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully"
Feb 13 22:31:00.737106 containerd[1615]: time="2025-02-13T22:31:00.737067755Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\""
Feb 13 22:31:00.737674 systemd[1]: run-netns-cni\x2d8711da65\x2dd440\x2d7a6a\x2d95be\x2d48033d731672.mount: Deactivated successfully.
Feb 13 22:31:00.739159 containerd[1615]: time="2025-02-13T22:31:00.737850228Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully"
Feb 13 22:31:00.739159 containerd[1615]: time="2025-02-13T22:31:00.737904387Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully"
Feb 13 22:31:00.739159 containerd[1615]: time="2025-02-13T22:31:00.738625021Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\""
Feb 13 22:31:00.739159 containerd[1615]: time="2025-02-13T22:31:00.738743713Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully"
Feb 13 22:31:00.739159 containerd[1615]: time="2025-02-13T22:31:00.738761327Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully"
Feb 13 22:31:00.740771 containerd[1615]: time="2025-02-13T22:31:00.740724083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:5,}"
Feb 13 22:31:00.927573 containerd[1615]: time="2025-02-13T22:31:00.927174720Z" level=error msg="Failed to destroy network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.927751 containerd[1615]: time="2025-02-13T22:31:00.927659708Z" level=error msg="encountered an error cleaning up failed sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.927751 containerd[1615]: time="2025-02-13T22:31:00.927728770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.928057 kubelet[2032]: E0213 22:31:00.928009 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.928157 kubelet[2032]: E0213 22:31:00.928096 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:31:00.928157 kubelet[2032]: E0213 22:31:00.928142 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:31:00.928712 kubelet[2032]: E0213 22:31:00.928250 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:31:00.930385 containerd[1615]: time="2025-02-13T22:31:00.930349649Z" level=error msg="Failed to destroy network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.930902 containerd[1615]: time="2025-02-13T22:31:00.930828836Z" level=error msg="encountered an error cleaning up failed sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.930973 containerd[1615]: time="2025-02-13T22:31:00.930940244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.931133 kubelet[2032]: E0213 22:31:00.931099 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:00.931220 kubelet[2032]: E0213 22:31:00.931148 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:31:00.931360 kubelet[2032]: E0213 22:31:00.931174 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:31:00.931505 kubelet[2032]: E0213 22:31:00.931469 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc"
Feb 13 22:31:01.416063 kubelet[2032]: E0213 22:31:01.415914 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:01.554623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2-shm.mount: Deactivated successfully.
Feb 13 22:31:01.734825 kubelet[2032]: I0213 22:31:01.734693 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2"
Feb 13 22:31:01.735895 containerd[1615]: time="2025-02-13T22:31:01.735675994Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\""
Feb 13 22:31:01.736731 containerd[1615]: time="2025-02-13T22:31:01.736490492Z" level=info msg="Ensure that sandbox 3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2 in task-service has been cleanup successfully"
Feb 13 22:31:01.739303 containerd[1615]: time="2025-02-13T22:31:01.739262410Z" level=info msg="TearDown network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" successfully"
Feb 13 22:31:01.739445 containerd[1615]: time="2025-02-13T22:31:01.739420363Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" returns successfully"
Feb 13 22:31:01.740504 containerd[1615]: time="2025-02-13T22:31:01.740471865Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\""
Feb 13 22:31:01.740734 containerd[1615]: time="2025-02-13T22:31:01.740704456Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully"
Feb 13 22:31:01.740975 containerd[1615]: time="2025-02-13T22:31:01.740833826Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully"
Feb 13 22:31:01.741351 systemd[1]: run-netns-cni\x2db37c7177\x2d3217\x2d6524\x2d17a9\x2d25ba161a6d4a.mount: Deactivated successfully.
Feb 13 22:31:01.744612 containerd[1615]: time="2025-02-13T22:31:01.743793459Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\""
Feb 13 22:31:01.744612 containerd[1615]: time="2025-02-13T22:31:01.743925882Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully"
Feb 13 22:31:01.744612 containerd[1615]: time="2025-02-13T22:31:01.743981579Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully"
Feb 13 22:31:01.745514 containerd[1615]: time="2025-02-13T22:31:01.745482043Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\""
Feb 13 22:31:01.745710 containerd[1615]: time="2025-02-13T22:31:01.745683085Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully"
Feb 13 22:31:01.745804 containerd[1615]: time="2025-02-13T22:31:01.745780697Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully"
Feb 13 22:31:01.746749 containerd[1615]: time="2025-02-13T22:31:01.746720336Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\""
Feb 13 22:31:01.747177 containerd[1615]: time="2025-02-13T22:31:01.747149498Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully"
Feb 13 22:31:01.747320 containerd[1615]: time="2025-02-13T22:31:01.747294072Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully"
Feb 13 22:31:01.747744 containerd[1615]: time="2025-02-13T22:31:01.747714935Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\""
Feb 13 22:31:01.747813 kubelet[2032]: I0213 22:31:01.747740 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922"
Feb 13 22:31:01.748170 containerd[1615]: time="2025-02-13T22:31:01.748141934Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully"
Feb 13 22:31:01.748296 containerd[1615]: time="2025-02-13T22:31:01.748271821Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully"
Feb 13 22:31:01.748992 containerd[1615]: time="2025-02-13T22:31:01.748962475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:6,}"
Feb 13 22:31:01.756052 containerd[1615]: time="2025-02-13T22:31:01.755987443Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\""
Feb 13 22:31:01.756364 containerd[1615]: time="2025-02-13T22:31:01.756320524Z" level=info msg="Ensure that sandbox 48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922 in task-service has been cleanup successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.756567742Z" level=info msg="TearDown network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.756597539Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" returns successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.757015889Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\""
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.757133290Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.757153189Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.757579477Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\""
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.757673286Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.757691050Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.758022464Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\""
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.758135526Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.758156374Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.758663105Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\""
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.758990092Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.759012741Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.759483972Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\""
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.759588856Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.759607337Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully"
Feb 13 22:31:01.760201 containerd[1615]: time="2025-02-13T22:31:01.760035319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:6,}"
Feb 13 22:31:01.762620 systemd[1]: run-netns-cni\x2d7c58b310\x2d1977\x2d155f\x2d10d4\x2dc3b2bcadddfb.mount: Deactivated successfully.
Feb 13 22:31:01.949442 containerd[1615]: time="2025-02-13T22:31:01.949264844Z" level=error msg="Failed to destroy network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.950212 containerd[1615]: time="2025-02-13T22:31:01.949741464Z" level=error msg="encountered an error cleaning up failed sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.950212 containerd[1615]: time="2025-02-13T22:31:01.949818968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.950345 kubelet[2032]: E0213 22:31:01.950139 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.950661 kubelet[2032]: E0213 22:31:01.950512 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:31:01.950661 kubelet[2032]: E0213 22:31:01.950619 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m"
Feb 13 22:31:01.951592 kubelet[2032]: E0213 22:31:01.950920 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8"
Feb 13 22:31:01.965339 containerd[1615]: time="2025-02-13T22:31:01.964085637Z" level=error msg="Failed to destroy network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.965339 containerd[1615]: time="2025-02-13T22:31:01.964519488Z" level=error msg="encountered an error cleaning up failed sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.965339 containerd[1615]: time="2025-02-13T22:31:01.964598755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.965669 kubelet[2032]: E0213 22:31:01.964842 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 22:31:01.965669 kubelet[2032]: E0213 22:31:01.964917 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:31:01.965669 kubelet[2032]: E0213 22:31:01.964946 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp"
Feb 13 22:31:01.965842 kubelet[2032]: E0213 22:31:01.965001 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc"
Feb 13 22:31:02.416796 kubelet[2032]: E0213 22:31:02.416696 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:02.554202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98-shm.mount: Deactivated successfully.
Feb 13 22:31:02.758846 kubelet[2032]: I0213 22:31:02.758509 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98"
Feb 13 22:31:02.761747 containerd[1615]: time="2025-02-13T22:31:02.761668166Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\""
Feb 13 22:31:02.765373 containerd[1615]: time="2025-02-13T22:31:02.762473670Z" level=info msg="Ensure that sandbox 330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98 in task-service has been cleanup successfully"
Feb 13 22:31:02.765373 containerd[1615]: time="2025-02-13T22:31:02.762698901Z" level=info msg="TearDown network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" successfully"
Feb 13 22:31:02.765373 containerd[1615]: time="2025-02-13T22:31:02.765215617Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" returns successfully"
Feb 13 22:31:02.768949 systemd[1]: run-netns-cni\x2dac7b496f\x2d1159\x2d5f6a\x2d90b2\x2d1b2429d2b263.mount: Deactivated successfully.
Feb 13 22:31:02.771195 containerd[1615]: time="2025-02-13T22:31:02.769299205Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\""
Feb 13 22:31:02.771656 containerd[1615]: time="2025-02-13T22:31:02.771392479Z" level=info msg="TearDown network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" successfully"
Feb 13 22:31:02.771656 containerd[1615]: time="2025-02-13T22:31:02.771422754Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" returns successfully"
Feb 13 22:31:02.773579 containerd[1615]: time="2025-02-13T22:31:02.773551181Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\""
Feb 13 22:31:02.774073 containerd[1615]: time="2025-02-13T22:31:02.773862024Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully"
Feb 13 22:31:02.774073 containerd[1615]: time="2025-02-13T22:31:02.773887022Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully"
Feb 13 22:31:02.774645 containerd[1615]: time="2025-02-13T22:31:02.774566660Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\""
Feb 13 22:31:02.775692 containerd[1615]: time="2025-02-13T22:31:02.775647028Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully"
Feb 13 22:31:02.775866 containerd[1615]: time="2025-02-13T22:31:02.775673666Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully"
Feb 13 22:31:02.776463 containerd[1615]: time="2025-02-13T22:31:02.776417401Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\""
Feb 13 22:31:02.779028 containerd[1615]: time="2025-02-13T22:31:02.778905991Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully"
Feb 13 22:31:02.779028 containerd[1615]: time="2025-02-13T22:31:02.778934926Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully"
Feb 13 22:31:02.779593 containerd[1615]: time="2025-02-13T22:31:02.779563512Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\""
Feb 13 22:31:02.780675 containerd[1615]: time="2025-02-13T22:31:02.780531726Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully"
Feb 13 22:31:02.780675 containerd[1615]: time="2025-02-13T22:31:02.780556609Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully"
Feb 13 22:31:02.781208 containerd[1615]: time="2025-02-13T22:31:02.781098497Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\""
Feb 13 22:31:02.781290 containerd[1615]: time="2025-02-13T22:31:02.781226505Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully"
Feb 13 22:31:02.781290 containerd[1615]: time="2025-02-13T22:31:02.781246507Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully"
Feb 13 22:31:02.781609 kubelet[2032]: I0213 22:31:02.781485 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9"
Feb 13 22:31:02.782338 containerd[1615]: time="2025-02-13T22:31:02.782308146Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" Feb 13
22:31:02.782545 containerd[1615]: time="2025-02-13T22:31:02.782506500Z" level=info msg="Ensure that sandbox 0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9 in task-service has been cleanup successfully" Feb 13 22:31:02.783204 containerd[1615]: time="2025-02-13T22:31:02.783105792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:7,}" Feb 13 22:31:02.783937 containerd[1615]: time="2025-02-13T22:31:02.783903115Z" level=info msg="TearDown network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" successfully" Feb 13 22:31:02.784015 containerd[1615]: time="2025-02-13T22:31:02.783933475Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" returns successfully" Feb 13 22:31:02.788884 containerd[1615]: time="2025-02-13T22:31:02.786692360Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" Feb 13 22:31:02.788884 containerd[1615]: time="2025-02-13T22:31:02.786885763Z" level=info msg="TearDown network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" successfully" Feb 13 22:31:02.788884 containerd[1615]: time="2025-02-13T22:31:02.786906923Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" returns successfully" Feb 13 22:31:02.789560 containerd[1615]: time="2025-02-13T22:31:02.789310605Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" Feb 13 22:31:02.789560 containerd[1615]: time="2025-02-13T22:31:02.789421109Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully" Feb 13 22:31:02.789560 containerd[1615]: time="2025-02-13T22:31:02.789440052Z" level=info msg="StopPodSandbox for 
\"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns successfully" Feb 13 22:31:02.789519 systemd[1]: run-netns-cni\x2dac5a0224\x2da1db\x2d3c8b\x2d83a3\x2dbf102c2d8f6e.mount: Deactivated successfully. Feb 13 22:31:02.790752 containerd[1615]: time="2025-02-13T22:31:02.789792776Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:31:02.790752 containerd[1615]: time="2025-02-13T22:31:02.790218412Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully" Feb 13 22:31:02.790752 containerd[1615]: time="2025-02-13T22:31:02.790238765Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully" Feb 13 22:31:02.791358 containerd[1615]: time="2025-02-13T22:31:02.790925143Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:31:02.791570 containerd[1615]: time="2025-02-13T22:31:02.791408481Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:31:02.791570 containerd[1615]: time="2025-02-13T22:31:02.791435574Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:31:02.792107 containerd[1615]: time="2025-02-13T22:31:02.791800503Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:31:02.792107 containerd[1615]: time="2025-02-13T22:31:02.791940760Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:31:02.792107 containerd[1615]: time="2025-02-13T22:31:02.791960679Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns 
successfully" Feb 13 22:31:02.792676 containerd[1615]: time="2025-02-13T22:31:02.792646967Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:31:02.793229 containerd[1615]: time="2025-02-13T22:31:02.792807566Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:31:02.793304 containerd[1615]: time="2025-02-13T22:31:02.793230731Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:31:02.794267 containerd[1615]: time="2025-02-13T22:31:02.794174011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:7,}" Feb 13 22:31:02.963760 containerd[1615]: time="2025-02-13T22:31:02.963701299Z" level=error msg="Failed to destroy network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.964873 containerd[1615]: time="2025-02-13T22:31:02.964624807Z" level=error msg="encountered an error cleaning up failed sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.964873 containerd[1615]: time="2025-02-13T22:31:02.964699136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox 
\"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.965277 kubelet[2032]: E0213 22:31:02.965225 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.965389 kubelet[2032]: E0213 22:31:02.965305 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:31:02.965389 kubelet[2032]: E0213 22:31:02.965343 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:31:02.965487 kubelet[2032]: E0213 22:31:02.965411 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc" Feb 13 22:31:02.970657 containerd[1615]: time="2025-02-13T22:31:02.970279160Z" level=error msg="Failed to destroy network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.970864 containerd[1615]: time="2025-02-13T22:31:02.970826766Z" level=error msg="encountered an error cleaning up failed sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.971074 containerd[1615]: time="2025-02-13T22:31:02.971025729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.971413 kubelet[2032]: E0213 22:31:02.971359 2032 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:02.971493 kubelet[2032]: E0213 22:31:02.971424 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m" Feb 13 22:31:02.971493 kubelet[2032]: E0213 22:31:02.971453 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m" Feb 13 22:31:02.971619 kubelet[2032]: E0213 22:31:02.971520 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8" Feb 13 22:31:03.417500 kubelet[2032]: E0213 22:31:03.417400 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:03.554506 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987-shm.mount: Deactivated successfully. Feb 13 22:31:03.791889 kubelet[2032]: I0213 22:31:03.790764 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987" Feb 13 22:31:03.792448 containerd[1615]: time="2025-02-13T22:31:03.792027019Z" level=info msg="StopPodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\"" Feb 13 22:31:03.792448 containerd[1615]: time="2025-02-13T22:31:03.792340378Z" level=info msg="Ensure that sandbox 80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987 in task-service has been cleanup successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.795262043Z" level=info msg="TearDown network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.795313198Z" level=info msg="StopPodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" returns successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.795664575Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\"" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.795759861Z" level=info msg="TearDown network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.795777733Z" 
level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" returns successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.796109082Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\"" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.796227322Z" level=info msg="TearDown network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.796253586Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" returns successfully" Feb 13 22:31:03.797613 containerd[1615]: time="2025-02-13T22:31:03.797509648Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\"" Feb 13 22:31:03.796691 systemd[1]: run-netns-cni\x2d280b0c18\x2db6ae\x2d8a0a\x2d82f7\x2d9ccb0aa04a35.mount: Deactivated successfully. 
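The teardown chain replayed in the entries above grows with every attempt: each retry re-runs `StopPodSandbox` for every previously failed sandbox before creating the next one (attempt 7, then 8). A sketch, again my own helper rather than anything referenced in the log, that extracts that chain of 64-hex-character sandbox IDs in first-seen order:

```python
import re

# containerd quotes sandbox IDs as \"<64 hex chars>\" inside its msg fields.
SANDBOX_RE = re.compile(r'StopPodSandbox for \\"([0-9a-f]{64})\\"')

def teardown_chain(log_lines):
    """Return sandbox IDs in first-seen order from StopPodSandbox entries."""
    seen, order = set(), []
    for line in log_lines:
        for sandbox_id in SANDBOX_RE.findall(line):
            if sandbox_id not in seen:
                seen.add(sandbox_id)
                order.append(sandbox_id)
    return order

# Abridged sample from the containerd entries above (note the escaped quotes):
sample = [
    'msg="StopPodSandbox for \\"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\\""',
]
print(teardown_chain(sample))
```

Applied to the full excerpt, the chain length matching the `Attempt:` counter in each `RunPodSandbox` metadata line confirms that no teardown ever succeeds at the CNI layer while `/var/lib/calico/nodename` is missing.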
Feb 13 22:31:03.798540 containerd[1615]: time="2025-02-13T22:31:03.797847498Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully" Feb 13 22:31:03.798540 containerd[1615]: time="2025-02-13T22:31:03.798071373Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully" Feb 13 22:31:03.800938 containerd[1615]: time="2025-02-13T22:31:03.799799985Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\"" Feb 13 22:31:03.800938 containerd[1615]: time="2025-02-13T22:31:03.800416906Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully" Feb 13 22:31:03.800938 containerd[1615]: time="2025-02-13T22:31:03.800527356Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully" Feb 13 22:31:03.802020 containerd[1615]: time="2025-02-13T22:31:03.801985280Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\"" Feb 13 22:31:03.802587 containerd[1615]: time="2025-02-13T22:31:03.802272109Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully" Feb 13 22:31:03.802587 containerd[1615]: time="2025-02-13T22:31:03.802305706Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully" Feb 13 22:31:03.803573 containerd[1615]: time="2025-02-13T22:31:03.803434784Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 22:31:03.803834 containerd[1615]: time="2025-02-13T22:31:03.803806897Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully" Feb 
13 22:31:03.804208 containerd[1615]: time="2025-02-13T22:31:03.803927124Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully" Feb 13 22:31:03.805348 containerd[1615]: time="2025-02-13T22:31:03.805314575Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:31:03.805729 containerd[1615]: time="2025-02-13T22:31:03.805702225Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully" Feb 13 22:31:03.805858 containerd[1615]: time="2025-02-13T22:31:03.805832767Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully" Feb 13 22:31:03.807654 containerd[1615]: time="2025-02-13T22:31:03.807623713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:8,}" Feb 13 22:31:03.823353 kubelet[2032]: I0213 22:31:03.823283 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6" Feb 13 22:31:03.825219 containerd[1615]: time="2025-02-13T22:31:03.824018367Z" level=info msg="StopPodSandbox for \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\"" Feb 13 22:31:03.825219 containerd[1615]: time="2025-02-13T22:31:03.824464820Z" level=info msg="Ensure that sandbox 67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6 in task-service has been cleanup successfully" Feb 13 22:31:03.825219 containerd[1615]: time="2025-02-13T22:31:03.824735425Z" level=info msg="TearDown network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" successfully" Feb 13 22:31:03.825219 containerd[1615]: time="2025-02-13T22:31:03.824788417Z" level=info msg="StopPodSandbox for 
\"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" returns successfully" Feb 13 22:31:03.828318 containerd[1615]: time="2025-02-13T22:31:03.828282414Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" Feb 13 22:31:03.828418 containerd[1615]: time="2025-02-13T22:31:03.828391003Z" level=info msg="TearDown network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" successfully" Feb 13 22:31:03.828517 containerd[1615]: time="2025-02-13T22:31:03.828417546Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" returns successfully" Feb 13 22:31:03.828902 systemd[1]: run-netns-cni\x2d64f9d425\x2db31e\x2def36\x2db02a\x2d6dc675d2dc82.mount: Deactivated successfully. Feb 13 22:31:03.831314 containerd[1615]: time="2025-02-13T22:31:03.830618566Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" Feb 13 22:31:03.831314 containerd[1615]: time="2025-02-13T22:31:03.830723921Z" level=info msg="TearDown network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" successfully" Feb 13 22:31:03.831314 containerd[1615]: time="2025-02-13T22:31:03.830743390Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" returns successfully" Feb 13 22:31:03.839139 containerd[1615]: time="2025-02-13T22:31:03.837319115Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" Feb 13 22:31:03.839139 containerd[1615]: time="2025-02-13T22:31:03.837442806Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully" Feb 13 22:31:03.839139 containerd[1615]: time="2025-02-13T22:31:03.837496672Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns 
successfully" Feb 13 22:31:03.842936 containerd[1615]: time="2025-02-13T22:31:03.839598928Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:31:03.842936 containerd[1615]: time="2025-02-13T22:31:03.842232330Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully" Feb 13 22:31:03.842936 containerd[1615]: time="2025-02-13T22:31:03.842260927Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.843899001Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.844060405Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.844080744Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.845671164Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.845781039Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.845800703Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.846907291Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 
22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.847027907Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.847064533Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:31:03.850151 containerd[1615]: time="2025-02-13T22:31:03.847699517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:8,}" Feb 13 22:31:03.960772 containerd[1615]: time="2025-02-13T22:31:03.960690642Z" level=error msg="Failed to destroy network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.962077 containerd[1615]: time="2025-02-13T22:31:03.961171873Z" level=error msg="encountered an error cleaning up failed sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.962077 containerd[1615]: time="2025-02-13T22:31:03.961287424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 22:31:03.962260 kubelet[2032]: E0213 22:31:03.961551 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.962260 kubelet[2032]: E0213 22:31:03.961636 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m" Feb 13 22:31:03.962260 kubelet[2032]: E0213 22:31:03.961677 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwf8m" Feb 13 22:31:03.962465 kubelet[2032]: E0213 22:31:03.961731 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwf8m_calico-system(30f07a2e-791e-4a29-bff8-bd4d882c17c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwf8m" podUID="30f07a2e-791e-4a29-bff8-bd4d882c17c8" Feb 13 22:31:03.970456 containerd[1615]: time="2025-02-13T22:31:03.970370567Z" level=error msg="Failed to destroy network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.970918 containerd[1615]: time="2025-02-13T22:31:03.970853396Z" level=error msg="encountered an error cleaning up failed sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.971003 containerd[1615]: time="2025-02-13T22:31:03.970937419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:8,} failed, error" error="failed to setup network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.971307 kubelet[2032]: E0213 22:31:03.971255 2032 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 22:31:03.971406 kubelet[2032]: E0213 22:31:03.971336 2032 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:31:03.971406 kubelet[2032]: E0213 22:31:03.971366 2032 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5hsnp" Feb 13 22:31:03.971503 kubelet[2032]: E0213 22:31:03.971432 2032 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5hsnp_default(e78d438d-8e1b-4f6e-afaf-094ee4898fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5hsnp" podUID="e78d438d-8e1b-4f6e-afaf-094ee4898fdc" Feb 13 22:31:04.161994 containerd[1615]: time="2025-02-13T22:31:04.161934019Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:04.163323 containerd[1615]: time="2025-02-13T22:31:04.163279001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 22:31:04.192955 containerd[1615]: time="2025-02-13T22:31:04.192857938Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:04.195933 containerd[1615]: time="2025-02-13T22:31:04.195854247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:04.197487 containerd[1615]: time="2025-02-13T22:31:04.196756135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.579861835s" Feb 13 22:31:04.197487 containerd[1615]: time="2025-02-13T22:31:04.196801373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 22:31:04.206774 containerd[1615]: time="2025-02-13T22:31:04.206729677Z" level=info msg="CreateContainer within sandbox \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 22:31:04.222806 containerd[1615]: time="2025-02-13T22:31:04.222744530Z" level=info msg="CreateContainer within sandbox \"1d4efe1fd33d6745fded2417cd79e0be1ddb671567e19841c75e683ff4ab1485\" 
for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24e1fbc2795b02c34adf3cb8ceff38947e803fbe0c4b9b3e6f78f3a9626fb4c6\"" Feb 13 22:31:04.223636 containerd[1615]: time="2025-02-13T22:31:04.223571407Z" level=info msg="StartContainer for \"24e1fbc2795b02c34adf3cb8ceff38947e803fbe0c4b9b3e6f78f3a9626fb4c6\"" Feb 13 22:31:04.351781 containerd[1615]: time="2025-02-13T22:31:04.351611075Z" level=info msg="StartContainer for \"24e1fbc2795b02c34adf3cb8ceff38947e803fbe0c4b9b3e6f78f3a9626fb4c6\" returns successfully" Feb 13 22:31:04.418358 kubelet[2032]: E0213 22:31:04.418225 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:04.441713 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 22:31:04.441889 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 22:31:04.557385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64-shm.mount: Deactivated successfully. Feb 13 22:31:04.557637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816806869.mount: Deactivated successfully. 
Feb 13 22:31:04.832770 kubelet[2032]: I0213 22:31:04.832658 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330" Feb 13 22:31:04.834257 containerd[1615]: time="2025-02-13T22:31:04.833626767Z" level=info msg="StopPodSandbox for \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\"" Feb 13 22:31:04.838237 containerd[1615]: time="2025-02-13T22:31:04.834238920Z" level=info msg="Ensure that sandbox 4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330 in task-service has been cleanup successfully" Feb 13 22:31:04.838237 containerd[1615]: time="2025-02-13T22:31:04.836686338Z" level=info msg="TearDown network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\" successfully" Feb 13 22:31:04.838237 containerd[1615]: time="2025-02-13T22:31:04.836712517Z" level=info msg="StopPodSandbox for \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\" returns successfully" Feb 13 22:31:04.839319 systemd[1]: run-netns-cni\x2d89a5f30e\x2dcd08\x2dfa4d\x2dec2f\x2d43df3707c3bc.mount: Deactivated successfully. 
Feb 13 22:31:04.840512 containerd[1615]: time="2025-02-13T22:31:04.840375793Z" level=info msg="StopPodSandbox for \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\"" Feb 13 22:31:04.840823 containerd[1615]: time="2025-02-13T22:31:04.840769556Z" level=info msg="TearDown network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" successfully" Feb 13 22:31:04.841028 containerd[1615]: time="2025-02-13T22:31:04.840923641Z" level=info msg="StopPodSandbox for \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" returns successfully" Feb 13 22:31:04.842948 containerd[1615]: time="2025-02-13T22:31:04.842661588Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" Feb 13 22:31:04.842948 containerd[1615]: time="2025-02-13T22:31:04.842799591Z" level=info msg="TearDown network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" successfully" Feb 13 22:31:04.842948 containerd[1615]: time="2025-02-13T22:31:04.842879833Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" returns successfully" Feb 13 22:31:04.843782 containerd[1615]: time="2025-02-13T22:31:04.843513187Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" Feb 13 22:31:04.843782 containerd[1615]: time="2025-02-13T22:31:04.843611243Z" level=info msg="TearDown network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" successfully" Feb 13 22:31:04.843782 containerd[1615]: time="2025-02-13T22:31:04.843629736Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" returns successfully" Feb 13 22:31:04.846637 containerd[1615]: time="2025-02-13T22:31:04.846603127Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" Feb 13 22:31:04.846922 
containerd[1615]: time="2025-02-13T22:31:04.846884550Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully" Feb 13 22:31:04.847367 containerd[1615]: time="2025-02-13T22:31:04.847316169Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns successfully" Feb 13 22:31:04.849102 containerd[1615]: time="2025-02-13T22:31:04.849039846Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:31:04.849582 containerd[1615]: time="2025-02-13T22:31:04.849157258Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully" Feb 13 22:31:04.850143 containerd[1615]: time="2025-02-13T22:31:04.849176520Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully" Feb 13 22:31:04.851988 containerd[1615]: time="2025-02-13T22:31:04.851427832Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:31:04.851988 containerd[1615]: time="2025-02-13T22:31:04.851547104Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:31:04.851988 containerd[1615]: time="2025-02-13T22:31:04.851566410Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:31:04.853395 containerd[1615]: time="2025-02-13T22:31:04.853365059Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:31:04.853498 containerd[1615]: time="2025-02-13T22:31:04.853473475Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:31:04.853498 
containerd[1615]: time="2025-02-13T22:31:04.853492020Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully" Feb 13 22:31:04.855811 containerd[1615]: time="2025-02-13T22:31:04.855450986Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:31:04.855811 containerd[1615]: time="2025-02-13T22:31:04.855583559Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:31:04.855811 containerd[1615]: time="2025-02-13T22:31:04.855603583Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:31:04.858301 containerd[1615]: time="2025-02-13T22:31:04.856814214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:9,}" Feb 13 22:31:04.865067 kubelet[2032]: I0213 22:31:04.864995 2032 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64" Feb 13 22:31:04.868206 containerd[1615]: time="2025-02-13T22:31:04.865883627Z" level=info msg="StopPodSandbox for \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\"" Feb 13 22:31:04.868206 containerd[1615]: time="2025-02-13T22:31:04.866208567Z" level=info msg="Ensure that sandbox fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64 in task-service has been cleanup successfully" Feb 13 22:31:04.868472 containerd[1615]: time="2025-02-13T22:31:04.868426354Z" level=info msg="TearDown network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\" successfully" Feb 13 22:31:04.868645 containerd[1615]: time="2025-02-13T22:31:04.868600595Z" level=info msg="StopPodSandbox for 
\"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\" returns successfully" Feb 13 22:31:04.871834 systemd[1]: run-netns-cni\x2d6a772cae\x2d85d8\x2d3dfd\x2de870\x2dfd8040488f04.mount: Deactivated successfully. Feb 13 22:31:04.874595 containerd[1615]: time="2025-02-13T22:31:04.874392633Z" level=info msg="StopPodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\"" Feb 13 22:31:04.874595 containerd[1615]: time="2025-02-13T22:31:04.874525413Z" level=info msg="TearDown network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" successfully" Feb 13 22:31:04.874595 containerd[1615]: time="2025-02-13T22:31:04.874547051Z" level=info msg="StopPodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" returns successfully" Feb 13 22:31:04.875154 containerd[1615]: time="2025-02-13T22:31:04.875114546Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\"" Feb 13 22:31:04.875471 containerd[1615]: time="2025-02-13T22:31:04.875376823Z" level=info msg="TearDown network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" successfully" Feb 13 22:31:04.875471 containerd[1615]: time="2025-02-13T22:31:04.875401844Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" returns successfully" Feb 13 22:31:04.876156 containerd[1615]: time="2025-02-13T22:31:04.875958049Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\"" Feb 13 22:31:04.876156 containerd[1615]: time="2025-02-13T22:31:04.876109861Z" level=info msg="TearDown network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" successfully" Feb 13 22:31:04.876156 containerd[1615]: time="2025-02-13T22:31:04.876131538Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" returns 
successfully" Feb 13 22:31:04.876897 containerd[1615]: time="2025-02-13T22:31:04.876845226Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\"" Feb 13 22:31:04.876991 containerd[1615]: time="2025-02-13T22:31:04.876955171Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully" Feb 13 22:31:04.876991 containerd[1615]: time="2025-02-13T22:31:04.876974150Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully" Feb 13 22:31:04.877876 containerd[1615]: time="2025-02-13T22:31:04.877707504Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\"" Feb 13 22:31:04.878193 containerd[1615]: time="2025-02-13T22:31:04.877864498Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully" Feb 13 22:31:04.878193 containerd[1615]: time="2025-02-13T22:31:04.878107186Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully" Feb 13 22:31:04.878822 containerd[1615]: time="2025-02-13T22:31:04.878609486Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\"" Feb 13 22:31:04.878822 containerd[1615]: time="2025-02-13T22:31:04.878718548Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully" Feb 13 22:31:04.878822 containerd[1615]: time="2025-02-13T22:31:04.878736600Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully" Feb 13 22:31:04.879523 containerd[1615]: time="2025-02-13T22:31:04.879395309Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 
22:31:04.879861 containerd[1615]: time="2025-02-13T22:31:04.879787536Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully" Feb 13 22:31:04.879861 containerd[1615]: time="2025-02-13T22:31:04.879811009Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully" Feb 13 22:31:04.880814 containerd[1615]: time="2025-02-13T22:31:04.880370997Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:31:04.883284 containerd[1615]: time="2025-02-13T22:31:04.883257145Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully" Feb 13 22:31:04.883462 containerd[1615]: time="2025-02-13T22:31:04.883409325Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully" Feb 13 22:31:04.890529 containerd[1615]: time="2025-02-13T22:31:04.890421906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:9,}" Feb 13 22:31:05.196328 systemd-networkd[1261]: cali3650fdc8a5b: Link UP Feb 13 22:31:05.196667 systemd-networkd[1261]: cali3650fdc8a5b: Gained carrier Feb 13 22:31:05.209504 kubelet[2032]: I0213 22:31:05.208804 2032 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qgqjd" podStartSLOduration=4.968601745 podStartE2EDuration="28.208762858s" podCreationTimestamp="2025-02-13 22:30:37 +0000 UTC" firstStartedPulling="2025-02-13 22:30:40.957899498 +0000 UTC m=+4.431296025" lastFinishedPulling="2025-02-13 22:31:04.198060611 +0000 UTC m=+27.671457138" observedRunningTime="2025-02-13 22:31:04.901443334 +0000 UTC m=+28.374839885" watchObservedRunningTime="2025-02-13 22:31:05.208762858 +0000 UTC 
m=+28.682159400" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:04.989 [INFO][3207] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.039 [INFO][3207] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0 nginx-deployment-85f456d6dd- default e78d438d-8e1b-4f6e-afaf-094ee4898fdc 1112 0 2025-02-13 22:30:56 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.244.31.90 nginx-deployment-85f456d6dd-5hsnp eth0 default [] [] [kns.default ksa.default.default] cali3650fdc8a5b [] []}} ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.039 [INFO][3207] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.118 [INFO][3233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" HandleID="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Workload="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.134 [INFO][3233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" 
HandleID="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Workload="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051880), Attrs:map[string]string{"namespace":"default", "node":"10.244.31.90", "pod":"nginx-deployment-85f456d6dd-5hsnp", "timestamp":"2025-02-13 22:31:05.118769159 +0000 UTC"}, Hostname:"10.244.31.90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.134 [INFO][3233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.134 [INFO][3233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.135 [INFO][3233] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.31.90' Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.137 [INFO][3233] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.143 [INFO][3233] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.151 [INFO][3233] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.155 [INFO][3233] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.158 [INFO][3233] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:05.215794 
containerd[1615]: 2025-02-13 22:31:05.158 [INFO][3233] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.161 [INFO][3233] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.171 [INFO][3233] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.178 [INFO][3233] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.193/26] block=192.168.13.192/26 handle="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.178 [INFO][3233] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.193/26] handle="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" host="10.244.31.90" Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.178 [INFO][3233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 22:31:05.215794 containerd[1615]: 2025-02-13 22:31:05.178 [INFO][3233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.193/26] IPv6=[] ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" HandleID="k8s-pod-network.0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Workload="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.218514 containerd[1615]: 2025-02-13 22:31:05.181 [INFO][3207] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"e78d438d-8e1b-4f6e-afaf-094ee4898fdc", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 30, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-5hsnp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3650fdc8a5b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 22:31:05.218514 containerd[1615]: 2025-02-13 22:31:05.182 [INFO][3207] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.193/32] ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.218514 containerd[1615]: 2025-02-13 22:31:05.182 [INFO][3207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3650fdc8a5b ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.218514 containerd[1615]: 2025-02-13 22:31:05.196 [INFO][3207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.218514 containerd[1615]: 2025-02-13 22:31:05.199 [INFO][3207] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"e78d438d-8e1b-4f6e-afaf-094ee4898fdc", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 30, 56, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff", Pod:"nginx-deployment-85f456d6dd-5hsnp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3650fdc8a5b", MAC:"8a:d4:70:42:6d:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 22:31:05.218514 containerd[1615]: 2025-02-13 22:31:05.214 [INFO][3207] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff" Namespace="default" Pod="nginx-deployment-85f456d6dd-5hsnp" WorkloadEndpoint="10.244.31.90-k8s-nginx--deployment--85f456d6dd--5hsnp-eth0" Feb 13 22:31:05.242039 systemd-networkd[1261]: calie663ee3afa8: Link UP Feb 13 22:31:05.243507 systemd-networkd[1261]: calie663ee3afa8: Gained carrier Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:04.981 [INFO][3181] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.039 [INFO][3181] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.31.90-k8s-csi--node--driver--xwf8m-eth0 csi-node-driver- calico-system 30f07a2e-791e-4a29-bff8-bd4d882c17c8 1031 0 2025-02-13 22:30:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.244.31.90 csi-node-driver-xwf8m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie663ee3afa8 [] []}} ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.040 [INFO][3181] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.118 [INFO][3232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" HandleID="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Workload="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.135 [INFO][3232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" HandleID="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Workload="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363300), Attrs:map[string]string{"namespace":"calico-system", "node":"10.244.31.90", "pod":"csi-node-driver-xwf8m", "timestamp":"2025-02-13 22:31:05.118092318 +0000 UTC"}, Hostname:"10.244.31.90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.135 [INFO][3232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.178 [INFO][3232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.178 [INFO][3232] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.31.90' Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.185 [INFO][3232] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.193 [INFO][3232] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.206 [INFO][3232] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.210 [INFO][3232] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.213 [INFO][3232] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.213 [INFO][3232] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.216 [INFO][3232] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262 Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 
22:31:05.222 [INFO][3232] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.229 [INFO][3232] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.194/26] block=192.168.13.192/26 handle="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.229 [INFO][3232] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.194/26] handle="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" host="10.244.31.90" Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.229 [INFO][3232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 22:31:05.262350 containerd[1615]: 2025-02-13 22:31:05.229 [INFO][3232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.194/26] IPv6=[] ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" HandleID="k8s-pod-network.2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Workload="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.264164 containerd[1615]: 2025-02-13 22:31:05.232 [INFO][3181] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-csi--node--driver--xwf8m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30f07a2e-791e-4a29-bff8-bd4d882c17c8", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 
13, 22, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"", Pod:"csi-node-driver-xwf8m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie663ee3afa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 22:31:05.264164 containerd[1615]: 2025-02-13 22:31:05.235 [INFO][3181] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.194/32] ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.264164 containerd[1615]: 2025-02-13 22:31:05.236 [INFO][3181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie663ee3afa8 ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.264164 containerd[1615]: 2025-02-13 22:31:05.245 [INFO][3181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" 
Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.264164 containerd[1615]: 2025-02-13 22:31:05.245 [INFO][3181] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-csi--node--driver--xwf8m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30f07a2e-791e-4a29-bff8-bd4d882c17c8", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262", Pod:"csi-node-driver-xwf8m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie663ee3afa8", MAC:"ee:c8:7a:cf:46:18", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 22:31:05.264164 containerd[1615]: 2025-02-13 22:31:05.258 [INFO][3181] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262" Namespace="calico-system" Pod="csi-node-driver-xwf8m" WorkloadEndpoint="10.244.31.90-k8s-csi--node--driver--xwf8m-eth0" Feb 13 22:31:05.265484 containerd[1615]: time="2025-02-13T22:31:05.265370429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:31:05.265651 containerd[1615]: time="2025-02-13T22:31:05.265610070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:31:05.267578 containerd[1615]: time="2025-02-13T22:31:05.267347086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:31:05.267578 containerd[1615]: time="2025-02-13T22:31:05.267493204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:31:05.313597 containerd[1615]: time="2025-02-13T22:31:05.312965501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:31:05.313597 containerd[1615]: time="2025-02-13T22:31:05.313074508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:31:05.313597 containerd[1615]: time="2025-02-13T22:31:05.313093360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:31:05.313597 containerd[1615]: time="2025-02-13T22:31:05.313316572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:31:05.375549 containerd[1615]: time="2025-02-13T22:31:05.375382145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5hsnp,Uid:e78d438d-8e1b-4f6e-afaf-094ee4898fdc,Namespace:default,Attempt:9,} returns sandbox id \"0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff\"" Feb 13 22:31:05.379199 containerd[1615]: time="2025-02-13T22:31:05.379151376Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 22:31:05.384028 containerd[1615]: time="2025-02-13T22:31:05.383959207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwf8m,Uid:30f07a2e-791e-4a29-bff8-bd4d882c17c8,Namespace:calico-system,Attempt:9,} returns sandbox id \"2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262\"" Feb 13 22:31:05.419468 kubelet[2032]: E0213 22:31:05.419393 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:05.905755 systemd[1]: run-containerd-runc-k8s.io-24e1fbc2795b02c34adf3cb8ceff38947e803fbe0c4b9b3e6f78f3a9626fb4c6-runc.Dtureq.mount: Deactivated successfully. 
Feb 13 22:31:06.323651 kernel: bpftool[3491]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 22:31:06.420514 kubelet[2032]: E0213 22:31:06.420426 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:06.521412 systemd-networkd[1261]: calie663ee3afa8: Gained IPv6LL Feb 13 22:31:06.640836 systemd-networkd[1261]: vxlan.calico: Link UP Feb 13 22:31:06.640851 systemd-networkd[1261]: vxlan.calico: Gained carrier Feb 13 22:31:07.161493 systemd-networkd[1261]: cali3650fdc8a5b: Gained IPv6LL Feb 13 22:31:07.421505 kubelet[2032]: E0213 22:31:07.420999 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:07.866151 systemd-networkd[1261]: vxlan.calico: Gained IPv6LL Feb 13 22:31:08.421380 kubelet[2032]: E0213 22:31:08.421313 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:09.367552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895322416.mount: Deactivated successfully. 
Feb 13 22:31:09.422281 kubelet[2032]: E0213 22:31:09.422225 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:10.422666 kubelet[2032]: E0213 22:31:10.422593 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:11.276231 containerd[1615]: time="2025-02-13T22:31:11.276073890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:11.277914 containerd[1615]: time="2025-02-13T22:31:11.277829910Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 22:31:11.279432 containerd[1615]: time="2025-02-13T22:31:11.279341767Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:11.287213 containerd[1615]: time="2025-02-13T22:31:11.285376066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:11.291125 containerd[1615]: time="2025-02-13T22:31:11.291070579Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.911856743s" Feb 13 22:31:11.291275 containerd[1615]: time="2025-02-13T22:31:11.291131066Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 22:31:11.293689 containerd[1615]: 
time="2025-02-13T22:31:11.293522731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 22:31:11.302780 containerd[1615]: time="2025-02-13T22:31:11.302702730Z" level=info msg="CreateContainer within sandbox \"0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 22:31:11.417596 containerd[1615]: time="2025-02-13T22:31:11.417494499Z" level=info msg="CreateContainer within sandbox \"0f9471b95c78f6c68b23c4659e1f918cc98c6acab6a4a9e341eabd0104af30ff\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1b82e5db1c8aa77dfebf5a1401684ce2674abe5605bfc33052ee6dc6c154f788\"" Feb 13 22:31:11.418949 containerd[1615]: time="2025-02-13T22:31:11.418875992Z" level=info msg="StartContainer for \"1b82e5db1c8aa77dfebf5a1401684ce2674abe5605bfc33052ee6dc6c154f788\"" Feb 13 22:31:11.423167 kubelet[2032]: E0213 22:31:11.423114 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:11.491651 containerd[1615]: time="2025-02-13T22:31:11.491587806Z" level=info msg="StartContainer for \"1b82e5db1c8aa77dfebf5a1401684ce2674abe5605bfc33052ee6dc6c154f788\" returns successfully" Feb 13 22:31:12.424435 kubelet[2032]: E0213 22:31:12.424322 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:13.080709 containerd[1615]: time="2025-02-13T22:31:13.080634336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:13.083170 containerd[1615]: time="2025-02-13T22:31:13.082445283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 22:31:13.085838 containerd[1615]: time="2025-02-13T22:31:13.084379329Z" level=info msg="ImageCreate event 
name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:13.086986 containerd[1615]: time="2025-02-13T22:31:13.086947115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:13.088222 containerd[1615]: time="2025-02-13T22:31:13.088164044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.794599424s" Feb 13 22:31:13.088378 containerd[1615]: time="2025-02-13T22:31:13.088349478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 22:31:13.091505 containerd[1615]: time="2025-02-13T22:31:13.091466124Z" level=info msg="CreateContainer within sandbox \"2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 22:31:13.110671 containerd[1615]: time="2025-02-13T22:31:13.110607729Z" level=info msg="CreateContainer within sandbox \"2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"40697edb90da4888a3638eb4e61ef0d602dc22362e32040e60c8b569963bcfa7\"" Feb 13 22:31:13.111827 containerd[1615]: time="2025-02-13T22:31:13.111797241Z" level=info msg="StartContainer for \"40697edb90da4888a3638eb4e61ef0d602dc22362e32040e60c8b569963bcfa7\"" Feb 13 22:31:13.200090 containerd[1615]: time="2025-02-13T22:31:13.200024487Z" level=info msg="StartContainer for 
\"40697edb90da4888a3638eb4e61ef0d602dc22362e32040e60c8b569963bcfa7\" returns successfully" Feb 13 22:31:13.202927 containerd[1615]: time="2025-02-13T22:31:13.202878010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 22:31:13.425557 kubelet[2032]: E0213 22:31:13.425383 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:14.425900 kubelet[2032]: E0213 22:31:14.425788 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:14.531585 systemd[1]: Started sshd@8-10.244.31.90:22-116.110.7.76:39068.service - OpenSSH per-connection server daemon (116.110.7.76:39068). Feb 13 22:31:14.885511 containerd[1615]: time="2025-02-13T22:31:14.885430851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:14.887027 containerd[1615]: time="2025-02-13T22:31:14.886951929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 22:31:14.888131 containerd[1615]: time="2025-02-13T22:31:14.888070358Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:14.891272 containerd[1615]: time="2025-02-13T22:31:14.891170623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:14.892514 containerd[1615]: time="2025-02-13T22:31:14.892148838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.689031651s" Feb 13 22:31:14.892514 containerd[1615]: time="2025-02-13T22:31:14.892210186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 22:31:14.895880 containerd[1615]: time="2025-02-13T22:31:14.895675367Z" level=info msg="CreateContainer within sandbox \"2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 22:31:14.914770 containerd[1615]: time="2025-02-13T22:31:14.914717230Z" level=info msg="CreateContainer within sandbox \"2a2b06dd183ac76eabf4be900c468f75b34e10e401c0817781e19823dce46262\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"592bef1eeba345837f1fe1fa3bc04d37661d701a6d250c9f284431984b33e063\"" Feb 13 22:31:14.917216 containerd[1615]: time="2025-02-13T22:31:14.915546279Z" level=info msg="StartContainer for \"592bef1eeba345837f1fe1fa3bc04d37661d701a6d250c9f284431984b33e063\"" Feb 13 22:31:15.057357 containerd[1615]: time="2025-02-13T22:31:15.057281074Z" level=info msg="StartContainer for \"592bef1eeba345837f1fe1fa3bc04d37661d701a6d250c9f284431984b33e063\" returns successfully" Feb 13 22:31:15.426208 kubelet[2032]: E0213 22:31:15.426137 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:15.541501 kubelet[2032]: I0213 22:31:15.541455 2032 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock 
versions: 1.0.0 Feb 13 22:31:15.541501 kubelet[2032]: I0213 22:31:15.541514 2032 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 22:31:15.974712 kubelet[2032]: I0213 22:31:15.974570 2032 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xwf8m" podStartSLOduration=29.466518941 podStartE2EDuration="38.974544976s" podCreationTimestamp="2025-02-13 22:30:37 +0000 UTC" firstStartedPulling="2025-02-13 22:31:05.385744927 +0000 UTC m=+28.859141459" lastFinishedPulling="2025-02-13 22:31:14.893770955 +0000 UTC m=+38.367167494" observedRunningTime="2025-02-13 22:31:15.969274012 +0000 UTC m=+39.442670558" watchObservedRunningTime="2025-02-13 22:31:15.974544976 +0000 UTC m=+39.447941515" Feb 13 22:31:15.975046 kubelet[2032]: I0213 22:31:15.974759 2032 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-5hsnp" podStartSLOduration=14.060907364 podStartE2EDuration="19.974750615s" podCreationTimestamp="2025-02-13 22:30:56 +0000 UTC" firstStartedPulling="2025-02-13 22:31:05.378591415 +0000 UTC m=+28.851987948" lastFinishedPulling="2025-02-13 22:31:11.292434671 +0000 UTC m=+34.765831199" observedRunningTime="2025-02-13 22:31:11.937172582 +0000 UTC m=+35.410569122" watchObservedRunningTime="2025-02-13 22:31:15.974750615 +0000 UTC m=+39.448147155" Feb 13 22:31:16.426932 kubelet[2032]: E0213 22:31:16.426830 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:17.390773 kubelet[2032]: E0213 22:31:17.390696 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:17.427739 kubelet[2032]: E0213 22:31:17.427626 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Feb 13 22:31:18.428445 kubelet[2032]: E0213 22:31:18.428356 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:19.057120 sshd[3732]: Invalid user system from 116.110.7.76 port 39068 Feb 13 22:31:19.429722 kubelet[2032]: E0213 22:31:19.429524 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:19.601313 sshd-session[3783]: pam_faillock(sshd:auth): User unknown Feb 13 22:31:19.605927 sshd[3732]: Postponed keyboard-interactive for invalid user system from 116.110.7.76 port 39068 ssh2 [preauth] Feb 13 22:31:20.145100 sshd-session[3783]: pam_unix(sshd:auth): check pass; user unknown Feb 13 22:31:20.145143 sshd-session[3783]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.110.7.76 Feb 13 22:31:20.146310 sshd-session[3783]: pam_faillock(sshd:auth): User unknown Feb 13 22:31:20.430829 kubelet[2032]: E0213 22:31:20.430560 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:21.431711 kubelet[2032]: E0213 22:31:21.431610 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:22.431887 kubelet[2032]: E0213 22:31:22.431810 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:22.610382 sshd[3732]: PAM: Permission denied for illegal user system from 116.110.7.76 Feb 13 22:31:22.611153 sshd[3732]: Failed keyboard-interactive/pam for invalid user system from 116.110.7.76 port 39068 ssh2 Feb 13 22:31:23.433074 kubelet[2032]: E0213 22:31:23.433006 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:23.617028 sshd[3732]: Connection closed by invalid user system 
116.110.7.76 port 39068 [preauth] Feb 13 22:31:23.620764 systemd[1]: sshd@8-10.244.31.90:22-116.110.7.76:39068.service: Deactivated successfully. Feb 13 22:31:24.434375 kubelet[2032]: E0213 22:31:24.434297 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:25.110295 kubelet[2032]: I0213 22:31:25.110220 2032 topology_manager.go:215] "Topology Admit Handler" podUID="7f71f4ea-2e72-45f4-bf44-8fa056e385e4" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 22:31:25.263835 kubelet[2032]: I0213 22:31:25.263706 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7f71f4ea-2e72-45f4-bf44-8fa056e385e4-data\") pod \"nfs-server-provisioner-0\" (UID: \"7f71f4ea-2e72-45f4-bf44-8fa056e385e4\") " pod="default/nfs-server-provisioner-0" Feb 13 22:31:25.263835 kubelet[2032]: I0213 22:31:25.263781 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78tj\" (UniqueName: \"kubernetes.io/projected/7f71f4ea-2e72-45f4-bf44-8fa056e385e4-kube-api-access-d78tj\") pod \"nfs-server-provisioner-0\" (UID: \"7f71f4ea-2e72-45f4-bf44-8fa056e385e4\") " pod="default/nfs-server-provisioner-0" Feb 13 22:31:25.417231 containerd[1615]: time="2025-02-13T22:31:25.416674129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f71f4ea-2e72-45f4-bf44-8fa056e385e4,Namespace:default,Attempt:0,}" Feb 13 22:31:25.436367 kubelet[2032]: E0213 22:31:25.436293 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:25.588997 systemd-networkd[1261]: cali60e51b789ff: Link UP Feb 13 22:31:25.590118 systemd-networkd[1261]: cali60e51b789ff: Gained carrier Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.480 [INFO][3796] cni-plugin/plugin.go 
325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.31.90-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7f71f4ea-2e72-45f4-bf44-8fa056e385e4 1257 0 2025-02-13 22:31:25 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.244.31.90 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.480 [INFO][3796] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.519 [INFO][3807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" HandleID="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Workload="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.606511 
containerd[1615]: 2025-02-13 22:31:25.534 [INFO][3807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" HandleID="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Workload="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000509e0), Attrs:map[string]string{"namespace":"default", "node":"10.244.31.90", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 22:31:25.51994796 +0000 UTC"}, Hostname:"10.244.31.90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.534 [INFO][3807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.534 [INFO][3807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.534 [INFO][3807] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.31.90' Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.538 [INFO][3807] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.549 [INFO][3807] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.556 [INFO][3807] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.559 [INFO][3807] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.562 [INFO][3807] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.562 [INFO][3807] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.565 [INFO][3807] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0 Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.573 [INFO][3807] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.580 [INFO][3807] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.195/26] block=192.168.13.192/26 
handle="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.580 [INFO][3807] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.195/26] handle="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" host="10.244.31.90" Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.581 [INFO][3807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 22:31:25.606511 containerd[1615]: 2025-02-13 22:31:25.581 [INFO][3807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.195/26] IPv6=[] ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" HandleID="k8s-pod-network.bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Workload="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.607654 containerd[1615]: 2025-02-13 22:31:25.582 [INFO][3796] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7f71f4ea-2e72-45f4-bf44-8fa056e385e4", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 22:31:25.607654 containerd[1615]: 2025-02-13 22:31:25.582 [INFO][3796] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.195/32] ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.607654 containerd[1615]: 2025-02-13 22:31:25.582 [INFO][3796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.607654 containerd[1615]: 2025-02-13 22:31:25.590 [INFO][3796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.607944 containerd[1615]: 2025-02-13 22:31:25.591 [INFO][3796] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7f71f4ea-2e72-45f4-bf44-8fa056e385e4", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ee:51:90:66:75:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 22:31:25.607944 containerd[1615]: 2025-02-13 22:31:25.604 [INFO][3796] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.31.90-k8s-nfs--server--provisioner--0-eth0" Feb 13 22:31:25.644374 containerd[1615]: time="2025-02-13T22:31:25.644000203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:31:25.644374 containerd[1615]: time="2025-02-13T22:31:25.644072930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:31:25.644374 containerd[1615]: time="2025-02-13T22:31:25.644096685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:31:25.644374 containerd[1615]: time="2025-02-13T22:31:25.644227818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:31:25.727125 containerd[1615]: time="2025-02-13T22:31:25.727074963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f71f4ea-2e72-45f4-bf44-8fa056e385e4,Namespace:default,Attempt:0,} returns sandbox id \"bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0\"" Feb 13 22:31:25.729261 containerd[1615]: time="2025-02-13T22:31:25.728946693Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 22:31:26.437051 kubelet[2032]: E0213 22:31:26.436974 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:27.321935 systemd-networkd[1261]: cali60e51b789ff: Gained IPv6LL Feb 13 22:31:27.438613 kubelet[2032]: E0213 22:31:27.437905 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:28.438868 kubelet[2032]: E0213 22:31:28.438820 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:29.173655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191055299.mount: Deactivated successfully. 
Feb 13 22:31:29.440582 kubelet[2032]: E0213 22:31:29.440454 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:30.441681 kubelet[2032]: E0213 22:31:30.441621 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:31.442713 kubelet[2032]: E0213 22:31:31.442562 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:32.135909 containerd[1615]: time="2025-02-13T22:31:32.135814988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:32.137861 containerd[1615]: time="2025-02-13T22:31:32.137766978Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 22:31:32.138995 containerd[1615]: time="2025-02-13T22:31:32.138958835Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:32.156631 containerd[1615]: time="2025-02-13T22:31:32.156489819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:31:32.163108 containerd[1615]: time="2025-02-13T22:31:32.162857172Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.433866086s" Feb 13 22:31:32.163108 containerd[1615]: time="2025-02-13T22:31:32.162933164Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 22:31:32.167105 containerd[1615]: time="2025-02-13T22:31:32.167066461Z" level=info msg="CreateContainer within sandbox \"bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 22:31:32.185074 containerd[1615]: time="2025-02-13T22:31:32.185011902Z" level=info msg="CreateContainer within sandbox \"bdc344f9d6f7931f4f71921f18d1b427ca53dd1241845fba2ab255a9a133e0b0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"45893c46d51d1d0c7afb1fc933470245b5870ab9895ef0322970e76c965e2fc5\"" Feb 13 22:31:32.187226 containerd[1615]: time="2025-02-13T22:31:32.185961595Z" level=info msg="StartContainer for \"45893c46d51d1d0c7afb1fc933470245b5870ab9895ef0322970e76c965e2fc5\"" Feb 13 22:31:32.271896 containerd[1615]: time="2025-02-13T22:31:32.271756772Z" level=info msg="StartContainer for \"45893c46d51d1d0c7afb1fc933470245b5870ab9895ef0322970e76c965e2fc5\" returns successfully" Feb 13 22:31:32.443226 kubelet[2032]: E0213 22:31:32.442985 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:33.443524 kubelet[2032]: E0213 22:31:33.443448 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:34.443924 kubelet[2032]: E0213 22:31:34.443857 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:35.444856 kubelet[2032]: E0213 22:31:35.444783 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 22:31:36.445208 kubelet[2032]: E0213 22:31:36.445133 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:37.391064 kubelet[2032]: E0213 22:31:37.390994 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:37.431200 containerd[1615]: time="2025-02-13T22:31:37.431131471Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:31:37.432428 containerd[1615]: time="2025-02-13T22:31:37.432055198Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:31:37.432428 containerd[1615]: time="2025-02-13T22:31:37.432169140Z" level=info msg="StopPodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:31:37.437077 containerd[1615]: time="2025-02-13T22:31:37.437004974Z" level=info msg="RemovePodSandbox for \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:31:37.442823 containerd[1615]: time="2025-02-13T22:31:37.442754703Z" level=info msg="Forcibly stopping sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\"" Feb 13 22:31:37.446316 kubelet[2032]: E0213 22:31:37.446264 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 22:31:37.453259 containerd[1615]: time="2025-02-13T22:31:37.442885529Z" level=info msg="TearDown network for sandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" successfully" Feb 13 22:31:37.465902 containerd[1615]: time="2025-02-13T22:31:37.465806140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\": an error 
occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 22:31:37.465902 containerd[1615]: time="2025-02-13T22:31:37.465892164Z" level=info msg="RemovePodSandbox \"f3eded1962285f1837ff002daa96bd430ad9d60ec11698a14326ce31e88f0ae2\" returns successfully" Feb 13 22:31:37.466620 containerd[1615]: time="2025-02-13T22:31:37.466591557Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:31:37.466734 containerd[1615]: time="2025-02-13T22:31:37.466708794Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:31:37.466810 containerd[1615]: time="2025-02-13T22:31:37.466735646Z" level=info msg="StopPodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully" Feb 13 22:31:37.467324 containerd[1615]: time="2025-02-13T22:31:37.467284532Z" level=info msg="RemovePodSandbox for \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:31:37.467324 containerd[1615]: time="2025-02-13T22:31:37.467321889Z" level=info msg="Forcibly stopping sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\"" Feb 13 22:31:37.467524 containerd[1615]: time="2025-02-13T22:31:37.467414689Z" level=info msg="TearDown network for sandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" successfully" Feb 13 22:31:37.470748 containerd[1615]: time="2025-02-13T22:31:37.470654445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.470845 containerd[1615]: time="2025-02-13T22:31:37.470795473Z" level=info msg="RemovePodSandbox \"70c7b8a184db90680e8f8b7825b6f798651bc5663501ce1629168cfdecf44ecb\" returns successfully" Feb 13 22:31:37.471233 containerd[1615]: time="2025-02-13T22:31:37.471199923Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:31:37.471405 containerd[1615]: time="2025-02-13T22:31:37.471329845Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:31:37.471405 containerd[1615]: time="2025-02-13T22:31:37.471390738Z" level=info msg="StopPodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:31:37.471943 containerd[1615]: time="2025-02-13T22:31:37.471898377Z" level=info msg="RemovePodSandbox for \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:31:37.471943 containerd[1615]: time="2025-02-13T22:31:37.471939188Z" level=info msg="Forcibly stopping sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\"" Feb 13 22:31:37.472061 containerd[1615]: time="2025-02-13T22:31:37.472029178Z" level=info msg="TearDown network for sandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" successfully" Feb 13 22:31:37.492212 containerd[1615]: time="2025-02-13T22:31:37.492037022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.492212 containerd[1615]: time="2025-02-13T22:31:37.492156856Z" level=info msg="RemovePodSandbox \"9990403887413b3acdb61739f3adf2455f0422608d6583d9b9800ca1fd4fb5ff\" returns successfully" Feb 13 22:31:37.492926 containerd[1615]: time="2025-02-13T22:31:37.492784120Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:31:37.494203 containerd[1615]: time="2025-02-13T22:31:37.493065763Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully" Feb 13 22:31:37.494203 containerd[1615]: time="2025-02-13T22:31:37.493092913Z" level=info msg="StopPodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully" Feb 13 22:31:37.494403 containerd[1615]: time="2025-02-13T22:31:37.494372586Z" level=info msg="RemovePodSandbox for \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:31:37.494470 containerd[1615]: time="2025-02-13T22:31:37.494409444Z" level=info msg="Forcibly stopping sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\"" Feb 13 22:31:37.494559 containerd[1615]: time="2025-02-13T22:31:37.494519449Z" level=info msg="TearDown network for sandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" successfully" Feb 13 22:31:37.497447 containerd[1615]: time="2025-02-13T22:31:37.497121688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.497447 containerd[1615]: time="2025-02-13T22:31:37.497201192Z" level=info msg="RemovePodSandbox \"912bd121fad3efcdf2e8de896b20f0578b164319a40cdf47c9644e520ede5664\" returns successfully" Feb 13 22:31:37.497683 containerd[1615]: time="2025-02-13T22:31:37.497535361Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" Feb 13 22:31:37.497683 containerd[1615]: time="2025-02-13T22:31:37.497634930Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully" Feb 13 22:31:37.497683 containerd[1615]: time="2025-02-13T22:31:37.497654652Z" level=info msg="StopPodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns successfully" Feb 13 22:31:37.499156 containerd[1615]: time="2025-02-13T22:31:37.498299512Z" level=info msg="RemovePodSandbox for \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" Feb 13 22:31:37.499156 containerd[1615]: time="2025-02-13T22:31:37.498335598Z" level=info msg="Forcibly stopping sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\"" Feb 13 22:31:37.499156 containerd[1615]: time="2025-02-13T22:31:37.498417321Z" level=info msg="TearDown network for sandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" successfully" Feb 13 22:31:37.501242 containerd[1615]: time="2025-02-13T22:31:37.501210078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.501591 containerd[1615]: time="2025-02-13T22:31:37.501384859Z" level=info msg="RemovePodSandbox \"692b056e95454c81cc355e80a80cc7b6627396b89288c6694923e58b233955fd\" returns successfully" Feb 13 22:31:37.501865 containerd[1615]: time="2025-02-13T22:31:37.501794587Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" Feb 13 22:31:37.501955 containerd[1615]: time="2025-02-13T22:31:37.501900695Z" level=info msg="TearDown network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" successfully" Feb 13 22:31:37.501955 containerd[1615]: time="2025-02-13T22:31:37.501919239Z" level=info msg="StopPodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" returns successfully" Feb 13 22:31:37.502516 containerd[1615]: time="2025-02-13T22:31:37.502409086Z" level=info msg="RemovePodSandbox for \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" Feb 13 22:31:37.502516 containerd[1615]: time="2025-02-13T22:31:37.502444854Z" level=info msg="Forcibly stopping sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\"" Feb 13 22:31:37.502647 containerd[1615]: time="2025-02-13T22:31:37.502550530Z" level=info msg="TearDown network for sandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" successfully" Feb 13 22:31:37.504997 containerd[1615]: time="2025-02-13T22:31:37.504960053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.505080 containerd[1615]: time="2025-02-13T22:31:37.505007131Z" level=info msg="RemovePodSandbox \"48f6ecefe1fca3fc64b572878614fb43e4322bd6e6808f3650462e8ede1b8922\" returns successfully" Feb 13 22:31:37.505942 containerd[1615]: time="2025-02-13T22:31:37.505539183Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" Feb 13 22:31:37.505942 containerd[1615]: time="2025-02-13T22:31:37.505646148Z" level=info msg="TearDown network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" successfully" Feb 13 22:31:37.505942 containerd[1615]: time="2025-02-13T22:31:37.505668430Z" level=info msg="StopPodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" returns successfully" Feb 13 22:31:37.506143 containerd[1615]: time="2025-02-13T22:31:37.505987531Z" level=info msg="RemovePodSandbox for \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" Feb 13 22:31:37.506143 containerd[1615]: time="2025-02-13T22:31:37.506015134Z" level=info msg="Forcibly stopping sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\"" Feb 13 22:31:37.506143 containerd[1615]: time="2025-02-13T22:31:37.506099790Z" level=info msg="TearDown network for sandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" successfully" Feb 13 22:31:37.508701 containerd[1615]: time="2025-02-13T22:31:37.508603547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.508701 containerd[1615]: time="2025-02-13T22:31:37.508649551Z" level=info msg="RemovePodSandbox \"0d994176f3dd82e4516689c8412d5a139db06ab0c0fbc3895262cf28e1ae81d9\" returns successfully" Feb 13 22:31:37.509433 containerd[1615]: time="2025-02-13T22:31:37.508986417Z" level=info msg="StopPodSandbox for \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\"" Feb 13 22:31:37.519201 containerd[1615]: time="2025-02-13T22:31:37.519066444Z" level=info msg="TearDown network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" successfully" Feb 13 22:31:37.519201 containerd[1615]: time="2025-02-13T22:31:37.519099141Z" level=info msg="StopPodSandbox for \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" returns successfully" Feb 13 22:31:37.519969 containerd[1615]: time="2025-02-13T22:31:37.519595104Z" level=info msg="RemovePodSandbox for \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\"" Feb 13 22:31:37.519969 containerd[1615]: time="2025-02-13T22:31:37.519625127Z" level=info msg="Forcibly stopping sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\"" Feb 13 22:31:37.519969 containerd[1615]: time="2025-02-13T22:31:37.519705640Z" level=info msg="TearDown network for sandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" successfully" Feb 13 22:31:37.524313 containerd[1615]: time="2025-02-13T22:31:37.524233149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.524313 containerd[1615]: time="2025-02-13T22:31:37.524291415Z" level=info msg="RemovePodSandbox \"67c36816cd42ec52634a63241f444fa1a73fdccfd1d9f8331b657a26db83a7a6\" returns successfully" Feb 13 22:31:37.525028 containerd[1615]: time="2025-02-13T22:31:37.524666654Z" level=info msg="StopPodSandbox for \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\"" Feb 13 22:31:37.525028 containerd[1615]: time="2025-02-13T22:31:37.524783105Z" level=info msg="TearDown network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\" successfully" Feb 13 22:31:37.525028 containerd[1615]: time="2025-02-13T22:31:37.524801265Z" level=info msg="StopPodSandbox for \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\" returns successfully" Feb 13 22:31:37.526828 containerd[1615]: time="2025-02-13T22:31:37.525802102Z" level=info msg="RemovePodSandbox for \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\"" Feb 13 22:31:37.526828 containerd[1615]: time="2025-02-13T22:31:37.525837694Z" level=info msg="Forcibly stopping sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\"" Feb 13 22:31:37.526828 containerd[1615]: time="2025-02-13T22:31:37.525926746Z" level=info msg="TearDown network for sandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\" successfully" Feb 13 22:31:37.546434 containerd[1615]: time="2025-02-13T22:31:37.546340464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.546434 containerd[1615]: time="2025-02-13T22:31:37.546407683Z" level=info msg="RemovePodSandbox \"4609d94c485f09088b0459e4192419979163f9cc404c226554d1e9d395d9d330\" returns successfully" Feb 13 22:31:37.550607 containerd[1615]: time="2025-02-13T22:31:37.550026865Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:31:37.550607 containerd[1615]: time="2025-02-13T22:31:37.550148987Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully" Feb 13 22:31:37.550607 containerd[1615]: time="2025-02-13T22:31:37.550168137Z" level=info msg="StopPodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully" Feb 13 22:31:37.550607 containerd[1615]: time="2025-02-13T22:31:37.550546125Z" level=info msg="RemovePodSandbox for \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:31:37.550607 containerd[1615]: time="2025-02-13T22:31:37.550577339Z" level=info msg="Forcibly stopping sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\"" Feb 13 22:31:37.550877 containerd[1615]: time="2025-02-13T22:31:37.550655032Z" level=info msg="TearDown network for sandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" successfully" Feb 13 22:31:37.566889 containerd[1615]: time="2025-02-13T22:31:37.566586498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.566889 containerd[1615]: time="2025-02-13T22:31:37.566708950Z" level=info msg="RemovePodSandbox \"b39c667b2390dcfd75bcf7108762c485b17c97a7942216db86265444f9d1fc56\" returns successfully" Feb 13 22:31:37.571914 containerd[1615]: time="2025-02-13T22:31:37.571651990Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 22:31:37.571914 containerd[1615]: time="2025-02-13T22:31:37.571850782Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully" Feb 13 22:31:37.571914 containerd[1615]: time="2025-02-13T22:31:37.571871898Z" level=info msg="StopPodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully" Feb 13 22:31:37.574208 containerd[1615]: time="2025-02-13T22:31:37.572827601Z" level=info msg="RemovePodSandbox for \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 22:31:37.574208 containerd[1615]: time="2025-02-13T22:31:37.572865640Z" level=info msg="Forcibly stopping sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\"" Feb 13 22:31:37.574208 containerd[1615]: time="2025-02-13T22:31:37.572963541Z" level=info msg="TearDown network for sandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" successfully" Feb 13 22:31:37.629060 containerd[1615]: time="2025-02-13T22:31:37.628910590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 22:31:37.629060 containerd[1615]: time="2025-02-13T22:31:37.628996526Z" level=info msg="RemovePodSandbox \"a51e9c0dac8e3167d45b1ec6278b334ff0dcadf067d08bedf8f4406faa892195\" returns successfully"
Feb 13 22:31:37.630088 containerd[1615]: time="2025-02-13T22:31:37.629785270Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\""
Feb 13 22:31:37.630088 containerd[1615]: time="2025-02-13T22:31:37.629983821Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully"
Feb 13 22:31:37.630088 containerd[1615]: time="2025-02-13T22:31:37.630006345Z" level=info msg="StopPodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully"
Feb 13 22:31:37.630566 containerd[1615]: time="2025-02-13T22:31:37.630512748Z" level=info msg="RemovePodSandbox for \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\""
Feb 13 22:31:37.630642 containerd[1615]: time="2025-02-13T22:31:37.630564919Z" level=info msg="Forcibly stopping sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\""
Feb 13 22:31:37.630702 containerd[1615]: time="2025-02-13T22:31:37.630657216Z" level=info msg="TearDown network for sandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" successfully"
Feb 13 22:31:37.651390 containerd[1615]: time="2025-02-13T22:31:37.651156472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.651390 containerd[1615]: time="2025-02-13T22:31:37.651261082Z" level=info msg="RemovePodSandbox \"1f21a6322e5ad03248caae957f1fca1ca644131f36d4001ade17c28fb384cd61\" returns successfully"
Feb 13 22:31:37.653325 containerd[1615]: time="2025-02-13T22:31:37.652628737Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\""
Feb 13 22:31:37.653325 containerd[1615]: time="2025-02-13T22:31:37.652764945Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully"
Feb 13 22:31:37.653325 containerd[1615]: time="2025-02-13T22:31:37.652784246Z" level=info msg="StopPodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully"
Feb 13 22:31:37.654317 containerd[1615]: time="2025-02-13T22:31:37.653936017Z" level=info msg="RemovePodSandbox for \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\""
Feb 13 22:31:37.654317 containerd[1615]: time="2025-02-13T22:31:37.654018391Z" level=info msg="Forcibly stopping sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\""
Feb 13 22:31:37.654317 containerd[1615]: time="2025-02-13T22:31:37.654110624Z" level=info msg="TearDown network for sandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" successfully"
Feb 13 22:31:37.670653 containerd[1615]: time="2025-02-13T22:31:37.670607428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.670856 containerd[1615]: time="2025-02-13T22:31:37.670671939Z" level=info msg="RemovePodSandbox \"7b2bf979fec215edde68ae1ebd1c4b23a20419a73f5df32b563ed0e3007ca631\" returns successfully"
Feb 13 22:31:37.671354 containerd[1615]: time="2025-02-13T22:31:37.671085737Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\""
Feb 13 22:31:37.671354 containerd[1615]: time="2025-02-13T22:31:37.671256631Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully"
Feb 13 22:31:37.671354 containerd[1615]: time="2025-02-13T22:31:37.671278276Z" level=info msg="StopPodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully"
Feb 13 22:31:37.671899 containerd[1615]: time="2025-02-13T22:31:37.671795724Z" level=info msg="RemovePodSandbox for \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\""
Feb 13 22:31:37.671899 containerd[1615]: time="2025-02-13T22:31:37.671835001Z" level=info msg="Forcibly stopping sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\""
Feb 13 22:31:37.672098 containerd[1615]: time="2025-02-13T22:31:37.671921064Z" level=info msg="TearDown network for sandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" successfully"
Feb 13 22:31:37.683706 containerd[1615]: time="2025-02-13T22:31:37.683639474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.683706 containerd[1615]: time="2025-02-13T22:31:37.683706276Z" level=info msg="RemovePodSandbox \"657a476d72315dc06315e777bb4e227eb02155787782c7c29c2631115160be0d\" returns successfully"
Feb 13 22:31:37.684456 containerd[1615]: time="2025-02-13T22:31:37.684176325Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\""
Feb 13 22:31:37.684456 containerd[1615]: time="2025-02-13T22:31:37.684325735Z" level=info msg="TearDown network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" successfully"
Feb 13 22:31:37.684456 containerd[1615]: time="2025-02-13T22:31:37.684345325Z" level=info msg="StopPodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" returns successfully"
Feb 13 22:31:37.685232 containerd[1615]: time="2025-02-13T22:31:37.684816204Z" level=info msg="RemovePodSandbox for \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\""
Feb 13 22:31:37.685232 containerd[1615]: time="2025-02-13T22:31:37.684857814Z" level=info msg="Forcibly stopping sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\""
Feb 13 22:31:37.685232 containerd[1615]: time="2025-02-13T22:31:37.684944489Z" level=info msg="TearDown network for sandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" successfully"
Feb 13 22:31:37.701708 containerd[1615]: time="2025-02-13T22:31:37.700907471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.701708 containerd[1615]: time="2025-02-13T22:31:37.700961428Z" level=info msg="RemovePodSandbox \"3ea1a11e3027ebbea77101959e89a850d2b37483f1c119d42fb7b78a2379bfb2\" returns successfully"
Feb 13 22:31:37.701708 containerd[1615]: time="2025-02-13T22:31:37.701302040Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\""
Feb 13 22:31:37.701708 containerd[1615]: time="2025-02-13T22:31:37.701405650Z" level=info msg="TearDown network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" successfully"
Feb 13 22:31:37.701708 containerd[1615]: time="2025-02-13T22:31:37.701427135Z" level=info msg="StopPodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" returns successfully"
Feb 13 22:31:37.702050 containerd[1615]: time="2025-02-13T22:31:37.701846146Z" level=info msg="RemovePodSandbox for \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\""
Feb 13 22:31:37.702050 containerd[1615]: time="2025-02-13T22:31:37.701878092Z" level=info msg="Forcibly stopping sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\""
Feb 13 22:31:37.702050 containerd[1615]: time="2025-02-13T22:31:37.701956350Z" level=info msg="TearDown network for sandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" successfully"
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.716785125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.716869538Z" level=info msg="RemovePodSandbox \"330d9daa8badcff355b361f528d0a25d2b63d144899139d9e2795e89fd0e7b98\" returns successfully"
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.718023560Z" level=info msg="StopPodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\""
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.718138282Z" level=info msg="TearDown network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" successfully"
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.718156836Z" level=info msg="StopPodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" returns successfully"
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.719108362Z" level=info msg="RemovePodSandbox for \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\""
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.719142397Z" level=info msg="Forcibly stopping sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\""
Feb 13 22:31:37.717852 containerd[1615]: time="2025-02-13T22:31:37.719950719Z" level=info msg="TearDown network for sandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" successfully"
Feb 13 22:31:37.756502 containerd[1615]: time="2025-02-13T22:31:37.756416531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.756502 containerd[1615]: time="2025-02-13T22:31:37.756508473Z" level=info msg="RemovePodSandbox \"80e55e8253138db8b470427ef3f09749c815d0c396b8d9b2d5170ba7da81f987\" returns successfully"
Feb 13 22:31:37.757319 containerd[1615]: time="2025-02-13T22:31:37.757017688Z" level=info msg="StopPodSandbox for \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\""
Feb 13 22:31:37.757319 containerd[1615]: time="2025-02-13T22:31:37.757164811Z" level=info msg="TearDown network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\" successfully"
Feb 13 22:31:37.757319 containerd[1615]: time="2025-02-13T22:31:37.757219015Z" level=info msg="StopPodSandbox for \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\" returns successfully"
Feb 13 22:31:37.757768 containerd[1615]: time="2025-02-13T22:31:37.757739886Z" level=info msg="RemovePodSandbox for \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\""
Feb 13 22:31:37.757968 containerd[1615]: time="2025-02-13T22:31:37.757913381Z" level=info msg="Forcibly stopping sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\""
Feb 13 22:31:37.758175 containerd[1615]: time="2025-02-13T22:31:37.758125277Z" level=info msg="TearDown network for sandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\" successfully"
Feb 13 22:31:37.789018 containerd[1615]: time="2025-02-13T22:31:37.788633778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 22:31:37.789018 containerd[1615]: time="2025-02-13T22:31:37.788721206Z" level=info msg="RemovePodSandbox \"fcb7816869d35d40c234be503f1fffbc139adc3e8f3115249c7cd598c7376d64\" returns successfully"
Feb 13 22:31:38.447143 kubelet[2032]: E0213 22:31:38.447073 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:39.447896 kubelet[2032]: E0213 22:31:39.447797 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:40.448946 kubelet[2032]: E0213 22:31:40.448846 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:41.449324 kubelet[2032]: E0213 22:31:41.449099 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:41.684833 kubelet[2032]: I0213 22:31:41.684713 2032 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.24903093 podStartE2EDuration="16.684688456s" podCreationTimestamp="2025-02-13 22:31:25 +0000 UTC" firstStartedPulling="2025-02-13 22:31:25.728636193 +0000 UTC m=+49.202032721" lastFinishedPulling="2025-02-13 22:31:32.164293708 +0000 UTC m=+55.637690247" observedRunningTime="2025-02-13 22:31:33.020331385 +0000 UTC m=+56.493727930" watchObservedRunningTime="2025-02-13 22:31:41.684688456 +0000 UTC m=+65.158084996"
Feb 13 22:31:41.685246 kubelet[2032]: I0213 22:31:41.685159 2032 topology_manager.go:215] "Topology Admit Handler" podUID="dc9088c3-278e-4f13-a13f-792f4586fbe2" podNamespace="default" podName="test-pod-1"
Feb 13 22:31:41.861547 kubelet[2032]: I0213 22:31:41.861376 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a41a0ac-9bd5-4f68-ac1b-925e60ec34dc\" (UniqueName: \"kubernetes.io/nfs/dc9088c3-278e-4f13-a13f-792f4586fbe2-pvc-6a41a0ac-9bd5-4f68-ac1b-925e60ec34dc\") pod \"test-pod-1\" (UID: \"dc9088c3-278e-4f13-a13f-792f4586fbe2\") " pod="default/test-pod-1"
Feb 13 22:31:41.861547 kubelet[2032]: I0213 22:31:41.861467 2032 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpbm\" (UniqueName: \"kubernetes.io/projected/dc9088c3-278e-4f13-a13f-792f4586fbe2-kube-api-access-ffpbm\") pod \"test-pod-1\" (UID: \"dc9088c3-278e-4f13-a13f-792f4586fbe2\") " pod="default/test-pod-1"
Feb 13 22:31:42.013579 kernel: FS-Cache: Loaded
Feb 13 22:31:42.106865 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 22:31:42.107024 kernel: RPC: Registered udp transport module.
Feb 13 22:31:42.107062 kernel: RPC: Registered tcp transport module.
Feb 13 22:31:42.107829 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 22:31:42.108791 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 22:31:42.449652 kubelet[2032]: E0213 22:31:42.449571 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:42.461743 kernel: NFS: Registering the id_resolver key type
Feb 13 22:31:42.461868 kernel: Key type id_resolver registered
Feb 13 22:31:42.462427 kernel: Key type id_legacy registered
Feb 13 22:31:42.520888 nfsidmap[4018]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Feb 13 22:31:42.529968 nfsidmap[4021]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Feb 13 22:31:42.593441 containerd[1615]: time="2025-02-13T22:31:42.593311169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dc9088c3-278e-4f13-a13f-792f4586fbe2,Namespace:default,Attempt:0,}"
Feb 13 22:31:42.785054 systemd-networkd[1261]: cali5ec59c6bf6e: Link UP
Feb 13 22:31:42.786941 systemd-networkd[1261]: cali5ec59c6bf6e: Gained carrier
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.666 [INFO][4025] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.31.90-k8s-test--pod--1-eth0 default dc9088c3-278e-4f13-a13f-792f4586fbe2 1337 0 2025-02-13 22:31:27 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.244.31.90 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.666 [INFO][4025] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.724 [INFO][4035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" HandleID="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Workload="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.739 [INFO][4035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" HandleID="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Workload="10.244.31.90-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003187f0), Attrs:map[string]string{"namespace":"default", "node":"10.244.31.90", "pod":"test-pod-1", "timestamp":"2025-02-13 22:31:42.724301074 +0000 UTC"}, Hostname:"10.244.31.90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.739 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.740 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.740 [INFO][4035] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.31.90'
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.742 [INFO][4035] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.749 [INFO][4035] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.755 [INFO][4035] ipam/ipam.go 489: Trying affinity for 192.168.13.192/26 host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.757 [INFO][4035] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.192/26 host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.761 [INFO][4035] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.761 [INFO][4035] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.763 [INFO][4035] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.769 [INFO][4035] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.777 [INFO][4035] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.196/26] block=192.168.13.192/26 handle="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.777 [INFO][4035] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.196/26] handle="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" host="10.244.31.90"
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.777 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 22:31:42.799700 containerd[1615]: 2025-02-13 22:31:42.777 [INFO][4035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.196/26] IPv6=[] ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" HandleID="k8s-pod-network.cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Workload="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.802006 containerd[1615]: 2025-02-13 22:31:42.779 [INFO][4025] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dc9088c3-278e-4f13-a13f-792f4586fbe2", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 31, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 22:31:42.802006 containerd[1615]: 2025-02-13 22:31:42.780 [INFO][4025] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.196/32] ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.802006 containerd[1615]: 2025-02-13 22:31:42.780 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.802006 containerd[1615]: 2025-02-13 22:31:42.786 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.802006 containerd[1615]: 2025-02-13 22:31:42.787 [INFO][4025] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.31.90-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dc9088c3-278e-4f13-a13f-792f4586fbe2", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 22, 31, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.31.90", ContainerID:"cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:9e:53:a1:70:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 22:31:42.802006 containerd[1615]: 2025-02-13 22:31:42.796 [INFO][4025] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.31.90-k8s-test--pod--1-eth0"
Feb 13 22:31:42.842249 containerd[1615]: time="2025-02-13T22:31:42.842028694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 22:31:42.842249 containerd[1615]: time="2025-02-13T22:31:42.842159451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 22:31:42.842249 containerd[1615]: time="2025-02-13T22:31:42.842209810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 22:31:42.842841 containerd[1615]: time="2025-02-13T22:31:42.842417215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 22:31:42.932051 containerd[1615]: time="2025-02-13T22:31:42.931984119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dc9088c3-278e-4f13-a13f-792f4586fbe2,Namespace:default,Attempt:0,} returns sandbox id \"cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4\""
Feb 13 22:31:42.935704 containerd[1615]: time="2025-02-13T22:31:42.935492460Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 22:31:43.342780 containerd[1615]: time="2025-02-13T22:31:43.342589207Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 22:31:43.343541 containerd[1615]: time="2025-02-13T22:31:43.343479342Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 22:31:43.347796 containerd[1615]: time="2025-02-13T22:31:43.347750547Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 412.196548ms"
Feb 13 22:31:43.347905 containerd[1615]: time="2025-02-13T22:31:43.347796564Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 22:31:43.351220 containerd[1615]: time="2025-02-13T22:31:43.351100310Z" level=info msg="CreateContainer within sandbox \"cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 22:31:43.374809 containerd[1615]: time="2025-02-13T22:31:43.374723138Z" level=info msg="CreateContainer within sandbox \"cbfc3ec8d1078326f8e377d903292a8c49fa03364a02578e1a33dc49d039acb4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c55ecb5ba6cf8bb04bfe6b0770a32f598dd7ccd4db4b35921abcf1ded0dde2e1\""
Feb 13 22:31:43.375956 containerd[1615]: time="2025-02-13T22:31:43.375887458Z" level=info msg="StartContainer for \"c55ecb5ba6cf8bb04bfe6b0770a32f598dd7ccd4db4b35921abcf1ded0dde2e1\""
Feb 13 22:31:43.450485 kubelet[2032]: E0213 22:31:43.450382 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:43.452072 containerd[1615]: time="2025-02-13T22:31:43.451464721Z" level=info msg="StartContainer for \"c55ecb5ba6cf8bb04bfe6b0770a32f598dd7ccd4db4b35921abcf1ded0dde2e1\" returns successfully"
Feb 13 22:31:44.451179 kubelet[2032]: E0213 22:31:44.451057 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:44.793672 systemd-networkd[1261]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 22:31:45.452223 kubelet[2032]: E0213 22:31:45.452114 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:46.452456 kubelet[2032]: E0213 22:31:46.452373 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:47.452801 kubelet[2032]: E0213 22:31:47.452720 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:48.453097 kubelet[2032]: E0213 22:31:48.452898 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:49.454092 kubelet[2032]: E0213 22:31:49.453997 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:50.454785 kubelet[2032]: E0213 22:31:50.454680 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:51.011660 systemd[1]: Started sshd@9-10.244.31.90:22-218.92.0.219:33280.service - OpenSSH per-connection server daemon (218.92.0.219:33280).
Feb 13 22:31:51.455595 kubelet[2032]: E0213 22:31:51.455511 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:52.426659 sshd-session[4160]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root
Feb 13 22:31:52.456505 kubelet[2032]: E0213 22:31:52.456440 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:53.457690 kubelet[2032]: E0213 22:31:53.457614 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:54.089551 sshd[4158]: PAM: Permission denied for root from 218.92.0.219
Feb 13 22:31:54.458521 kubelet[2032]: E0213 22:31:54.458310 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 22:31:54.472413 sshd-session[4163]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.219 user=root