Jan 29 13:05:15.033022 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 13:05:15.033073 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 13:05:15.033088 kernel: BIOS-provided physical RAM map:
Jan 29 13:05:15.033104 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 13:05:15.033115 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 13:05:15.033125 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 13:05:15.033137 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 13:05:15.033147 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 13:05:15.033158 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 13:05:15.033168 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 13:05:15.033178 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 13:05:15.033188 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 13:05:15.033204 kernel: NX (Execute Disable) protection: active
Jan 29 13:05:15.033215 kernel: APIC: Static calls initialized
Jan 29 13:05:15.033228 kernel: SMBIOS 2.8 present.
Jan 29 13:05:15.033239 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 29 13:05:15.033251 kernel: Hypervisor detected: KVM
Jan 29 13:05:15.033267 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 13:05:15.033279 kernel: kvm-clock: using sched offset of 4546247524 cycles
Jan 29 13:05:15.034371 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 13:05:15.034384 kernel: tsc: Detected 2499.998 MHz processor
Jan 29 13:05:15.034396 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 13:05:15.034408 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 13:05:15.034420 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 13:05:15.034432 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 13:05:15.034443 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 13:05:15.034475 kernel: Using GB pages for direct mapping
Jan 29 13:05:15.034487 kernel: ACPI: Early table checksum verification disabled
Jan 29 13:05:15.034499 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 13:05:15.034511 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034523 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034534 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034546 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 13:05:15.034558 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034569 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034586 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034598 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 13:05:15.034610 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 13:05:15.034621 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 13:05:15.034633 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 13:05:15.034651 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 13:05:15.034663 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 13:05:15.034694 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 13:05:15.034707 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 13:05:15.034719 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 13:05:15.034731 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 13:05:15.034743 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 13:05:15.034754 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 13:05:15.034766 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 13:05:15.034778 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 13:05:15.034796 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 13:05:15.034808 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 13:05:15.034820 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 13:05:15.034832 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 13:05:15.034844 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 13:05:15.034867 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 13:05:15.034891 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 13:05:15.034904 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 13:05:15.034916 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 13:05:15.034933 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 13:05:15.034946 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 13:05:15.034962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 13:05:15.034976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 13:05:15.034989 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 13:05:15.035001 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 13:05:15.035013 kernel: Zone ranges:
Jan 29 13:05:15.035025 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 13:05:15.035038 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 13:05:15.035055 kernel: Normal empty
Jan 29 13:05:15.035067 kernel: Movable zone start for each node
Jan 29 13:05:15.035079 kernel: Early memory node ranges
Jan 29 13:05:15.035092 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 13:05:15.035103 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 13:05:15.035116 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 13:05:15.035128 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 13:05:15.035140 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 13:05:15.035152 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 13:05:15.035164 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 13:05:15.035181 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 13:05:15.035193 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 13:05:15.035206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 13:05:15.035218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 13:05:15.035230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 13:05:15.035242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 13:05:15.035254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 13:05:15.035266 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 13:05:15.035278 kernel: TSC deadline timer available
Jan 29 13:05:15.035315 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 13:05:15.036361 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 13:05:15.036378 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 13:05:15.036391 kernel: Booting paravirtualized kernel on KVM
Jan 29 13:05:15.036403 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 13:05:15.036416 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 13:05:15.036428 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 13:05:15.036440 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 13:05:15.036452 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 13:05:15.036473 kernel: kvm-guest: PV spinlocks enabled
Jan 29 13:05:15.036485 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 13:05:15.036499 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 13:05:15.036512 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 13:05:15.036524 kernel: random: crng init done
Jan 29 13:05:15.036536 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 13:05:15.036549 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 13:05:15.036561 kernel: Fallback order for Node 0: 0
Jan 29 13:05:15.036578 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 13:05:15.036591 kernel: Policy zone: DMA32
Jan 29 13:05:15.036603 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 13:05:15.036615 kernel: software IO TLB: area num 16.
Jan 29 13:05:15.036627 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 194828K reserved, 0K cma-reserved)
Jan 29 13:05:15.036640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 13:05:15.036652 kernel: Kernel/User page tables isolation: enabled
Jan 29 13:05:15.036664 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 13:05:15.036689 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 13:05:15.036708 kernel: Dynamic Preempt: voluntary
Jan 29 13:05:15.036720 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 13:05:15.036734 kernel: rcu: RCU event tracing is enabled.
Jan 29 13:05:15.036746 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 13:05:15.036759 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 13:05:15.036783 kernel: Rude variant of Tasks RCU enabled.
Jan 29 13:05:15.036801 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 13:05:15.036814 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 13:05:15.036826 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 13:05:15.036839 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 13:05:15.036852 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 13:05:15.036864 kernel: Console: colour VGA+ 80x25
Jan 29 13:05:15.036882 kernel: printk: console [tty0] enabled
Jan 29 13:05:15.036895 kernel: printk: console [ttyS0] enabled
Jan 29 13:05:15.036908 kernel: ACPI: Core revision 20230628
Jan 29 13:05:15.036920 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 13:05:15.036941 kernel: x2apic enabled
Jan 29 13:05:15.036965 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 13:05:15.036979 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 29 13:05:15.036992 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 29 13:05:15.037010 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 13:05:15.037023 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 13:05:15.037036 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 13:05:15.037048 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 13:05:15.037061 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 13:05:15.037073 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 13:05:15.037091 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 13:05:15.037105 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 13:05:15.037117 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 13:05:15.037130 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 13:05:15.037143 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 13:05:15.037155 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 29 13:05:15.037168 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 13:05:15.037180 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 13:05:15.037194 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 13:05:15.037206 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 13:05:15.037219 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 13:05:15.037237 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 13:05:15.037250 kernel: Freeing SMP alternatives memory: 32K
Jan 29 13:05:15.037263 kernel: pid_max: default: 32768 minimum: 301
Jan 29 13:05:15.037275 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 13:05:15.038324 kernel: landlock: Up and running.
Jan 29 13:05:15.038341 kernel: SELinux: Initializing.
Jan 29 13:05:15.038354 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 13:05:15.038367 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 13:05:15.038380 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 29 13:05:15.038393 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 13:05:15.038406 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 13:05:15.038428 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 13:05:15.038441 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 29 13:05:15.038454 kernel: signal: max sigframe size: 1776
Jan 29 13:05:15.038467 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 13:05:15.038480 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 13:05:15.038493 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 13:05:15.038506 kernel: smp: Bringing up secondary CPUs ...
Jan 29 13:05:15.038519 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 13:05:15.038532 kernel: .... node #0, CPUs: #1
Jan 29 13:05:15.038550 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 13:05:15.038563 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 13:05:15.038576 kernel: smpboot: Max logical packages: 16
Jan 29 13:05:15.038589 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 29 13:05:15.038602 kernel: devtmpfs: initialized
Jan 29 13:05:15.038614 kernel: x86/mm: Memory block size: 128MB
Jan 29 13:05:15.038627 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 13:05:15.038640 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 13:05:15.038653 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 13:05:15.038683 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 13:05:15.038697 kernel: audit: initializing netlink subsys (disabled)
Jan 29 13:05:15.038710 kernel: audit: type=2000 audit(1738155913.124:1): state=initialized audit_enabled=0 res=1
Jan 29 13:05:15.038722 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 13:05:15.038735 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 13:05:15.038748 kernel: cpuidle: using governor menu
Jan 29 13:05:15.038760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 13:05:15.038773 kernel: dca service started, version 1.12.1
Jan 29 13:05:15.038786 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 13:05:15.038804 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 13:05:15.038818 kernel: PCI: Using configuration type 1 for base access
Jan 29 13:05:15.038830 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 13:05:15.038843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 13:05:15.038856 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 13:05:15.038869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 13:05:15.038881 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 13:05:15.038894 kernel: ACPI: Added _OSI(Module Device)
Jan 29 13:05:15.038907 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 13:05:15.038925 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 13:05:15.038938 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 13:05:15.038951 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 13:05:15.038963 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 13:05:15.038976 kernel: ACPI: Interpreter enabled
Jan 29 13:05:15.038989 kernel: ACPI: PM: (supports S0 S5)
Jan 29 13:05:15.039001 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 13:05:15.039014 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 13:05:15.039027 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 13:05:15.039045 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 13:05:15.039058 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 13:05:15.040391 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 13:05:15.040597 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 13:05:15.040792 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 13:05:15.040813 kernel: PCI host bridge to bus 0000:00
Jan 29 13:05:15.040996 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 13:05:15.041165 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 13:05:15.044328 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 13:05:15.044491 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 13:05:15.044646 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 13:05:15.044817 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 13:05:15.044971 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 13:05:15.045175 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 13:05:15.045389 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 13:05:15.045562 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 13:05:15.045745 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 13:05:15.045913 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 13:05:15.046081 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 13:05:15.046260 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.046475 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 13:05:15.046659 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.046842 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 13:05:15.047032 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.047204 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 13:05:15.049455 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.049646 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 13:05:15.049843 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.050014 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 13:05:15.050193 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.050439 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 13:05:15.050627 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.050816 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 13:05:15.050990 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 13:05:15.051156 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 13:05:15.051348 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 13:05:15.051515 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 13:05:15.051695 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 13:05:15.051863 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 13:05:15.052038 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 13:05:15.052218 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 13:05:15.053189 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 13:05:15.053379 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 13:05:15.053547 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 13:05:15.053742 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 13:05:15.053911 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 13:05:15.054098 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 13:05:15.054265 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 29 13:05:15.054450 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 13:05:15.054630 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 13:05:15.054812 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 13:05:15.055001 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 13:05:15.055187 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 13:05:15.055373 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 13:05:15.055543 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 13:05:15.055726 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 13:05:15.055917 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 13:05:15.056107 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 13:05:15.056330 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 13:05:15.056508 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 13:05:15.056695 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 13:05:15.056887 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 13:05:15.057073 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 13:05:15.057257 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 13:05:15.057475 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 13:05:15.057650 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 13:05:15.057844 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 13:05:15.058016 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 13:05:15.058182 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 13:05:15.058419 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 13:05:15.058584 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 13:05:15.058764 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 13:05:15.058931 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 13:05:15.059104 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 13:05:15.059272 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 13:05:15.059451 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 13:05:15.059615 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 13:05:15.059792 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 13:05:15.059958 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 13:05:15.060123 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 13:05:15.060323 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 13:05:15.060500 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 13:05:15.060663 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 13:05:15.060844 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 13:05:15.061008 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 13:05:15.061171 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 13:05:15.061191 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 13:05:15.061205 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 13:05:15.061219 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 13:05:15.061240 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 13:05:15.061253 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 13:05:15.061266 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 13:05:15.061279 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 13:05:15.061306 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 13:05:15.061319 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 13:05:15.061332 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 13:05:15.061345 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 13:05:15.061359 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 13:05:15.061378 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 13:05:15.061392 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 13:05:15.061404 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 13:05:15.061417 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 13:05:15.061430 kernel: iommu: Default domain type: Translated
Jan 29 13:05:15.061443 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 13:05:15.061456 kernel: PCI: Using ACPI for IRQ routing
Jan 29 13:05:15.061469 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 13:05:15.061482 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 13:05:15.061501 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 13:05:15.061678 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 13:05:15.061848 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 13:05:15.062013 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 13:05:15.062032 kernel: vgaarb: loaded
Jan 29 13:05:15.062046 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 13:05:15.062059 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 13:05:15.062072 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 13:05:15.062085 kernel: pnp: PnP ACPI init
Jan 29 13:05:15.062264 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 13:05:15.062297 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 13:05:15.062312 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 13:05:15.062325 kernel: NET: Registered PF_INET protocol family
Jan 29 13:05:15.062338 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 13:05:15.062351 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 13:05:15.062365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 13:05:15.062378 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 13:05:15.062399 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 13:05:15.062412 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 13:05:15.062425 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 13:05:15.062438 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 13:05:15.062451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 13:05:15.062465 kernel: NET: Registered PF_XDP protocol family
Jan 29 13:05:15.062628 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 29 13:05:15.062809 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 13:05:15.062985 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 13:05:15.063152 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 13:05:15.063332 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 13:05:15.063511 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 13:05:15.063695 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 13:05:15.063863 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 13:05:15.064040 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 13:05:15.064207 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 13:05:15.064389 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 13:05:15.064554 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 13:05:15.064734 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 13:05:15.064904 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 13:05:15.065071 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 13:05:15.065249 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 13:05:15.065497 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 13:05:15.065687 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 13:05:15.065855 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 13:05:15.066019 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 13:05:15.066183 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 13:05:15.066364 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 13:05:15.066529 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 13:05:15.066707 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 13:05:15.066886 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 13:05:15.067051 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 13:05:15.067215 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 13:05:15.067455 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 13:05:15.067620 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 13:05:15.067807 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 13:05:15.067979 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 13:05:15.068144 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 13:05:15.068322 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 13:05:15.068489 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 13:05:15.068653 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 13:05:15.068834 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 13:05:15.069001 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 13:05:15.069168 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 13:05:15.069350 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 13:05:15.069525 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 13:05:15.069703 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 13:05:15.069871 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 13:05:15.070037 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 13:05:15.070203 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 13:05:15.070428 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 13:05:15.070595 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 13:05:15.070773 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 13:05:15.070936 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 13:05:15.071099 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 13:05:15.071262 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 13:05:15.071432 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 13:05:15.071584 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 13:05:15.071747 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 13:05:15.071907 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 13:05:15.072057 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 13:05:15.072206 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 13:05:15.072390 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 13:05:15.072550 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 13:05:15.072722 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 13:05:15.072890 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 13:05:15.073066 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 29 13:05:15.073225 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 13:05:15.073398 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 13:05:15.073578 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 29 13:05:15.073753 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 13:05:15.073913 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 13:05:15.074095 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 29 13:05:15.074256 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 13:05:15.074472 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 13:05:15.074645 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 29 13:05:15.074814 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 13:05:15.074969 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 13:05:15.075145 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 29 13:05:15.075349 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 13:05:15.075515 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 13:05:15.075693 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 29 13:05:15.075851 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 13:05:15.076007 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 13:05:15.076170 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 29 13:05:15.076355 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 13:05:15.076522 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 13:05:15.076544 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 13:05:15.076558 kernel: PCI: CLS 0 bytes, default 64
Jan 29 13:05:15.076572 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
29 13:05:15.076586 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 13:05:15.076600 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 13:05:15.076614 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 29 13:05:15.076628 kernel: Initialise system trusted keyrings Jan 29 13:05:15.076649 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 13:05:15.076663 kernel: Key type asymmetric registered Jan 29 13:05:15.076688 kernel: Asymmetric key parser 'x509' registered Jan 29 13:05:15.076701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 13:05:15.076715 kernel: io scheduler mq-deadline registered Jan 29 13:05:15.076729 kernel: io scheduler kyber registered Jan 29 13:05:15.076743 kernel: io scheduler bfq registered Jan 29 13:05:15.076910 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 13:05:15.077078 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 13:05:15.077254 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.077472 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 13:05:15.077665 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 13:05:15.077846 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.078012 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 29 13:05:15.078176 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 29 13:05:15.078382 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.078550 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 29 
13:05:15.078731 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 29 13:05:15.078899 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.079067 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 29 13:05:15.079232 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 29 13:05:15.079450 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.079616 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 29 13:05:15.079795 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 29 13:05:15.079961 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.080125 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 29 13:05:15.080300 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 29 13:05:15.080476 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.080641 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 29 13:05:15.080817 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 29 13:05:15.080983 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 13:05:15.081005 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 13:05:15.081020 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 13:05:15.081041 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 13:05:15.081055 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 13:05:15.081069 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 13:05:15.081083 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 13:05:15.081097 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 13:05:15.081111 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 13:05:15.081125 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 13:05:15.081337 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 13:05:15.081504 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 13:05:15.081682 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T13:05:14 UTC (1738155914) Jan 29 13:05:15.081844 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 13:05:15.081864 kernel: intel_pstate: CPU model not supported Jan 29 13:05:15.081886 kernel: NET: Registered PF_INET6 protocol family Jan 29 13:05:15.081900 kernel: Segment Routing with IPv6 Jan 29 13:05:15.081913 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 13:05:15.081927 kernel: NET: Registered PF_PACKET protocol family Jan 29 13:05:15.081941 kernel: Key type dns_resolver registered Jan 29 13:05:15.081959 kernel: IPI shorthand broadcast: enabled Jan 29 13:05:15.081973 kernel: sched_clock: Marking stable (1243003644, 231462938)->(1597687811, -123221229) Jan 29 13:05:15.081987 kernel: registered taskstats version 1 Jan 29 13:05:15.082001 kernel: Loading compiled-in X.509 certificates Jan 29 13:05:15.082014 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 13:05:15.082028 kernel: Key type .fscrypt registered Jan 29 13:05:15.082041 kernel: Key type fscrypt-provisioning registered Jan 29 13:05:15.082054 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 13:05:15.082068 kernel: ima: Allocated hash algorithm: sha1 Jan 29 13:05:15.082087 kernel: ima: No architecture policies found Jan 29 13:05:15.082101 kernel: clk: Disabling unused clocks Jan 29 13:05:15.082115 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 13:05:15.082128 kernel: Write protecting the kernel read-only data: 36864k Jan 29 13:05:15.082142 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 13:05:15.082156 kernel: Run /init as init process Jan 29 13:05:15.082169 kernel: with arguments: Jan 29 13:05:15.082183 kernel: /init Jan 29 13:05:15.082196 kernel: with environment: Jan 29 13:05:15.082214 kernel: HOME=/ Jan 29 13:05:15.082227 kernel: TERM=linux Jan 29 13:05:15.082241 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 13:05:15.082266 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 13:05:15.082309 systemd[1]: Detected virtualization kvm. Jan 29 13:05:15.082325 systemd[1]: Detected architecture x86-64. Jan 29 13:05:15.082340 systemd[1]: Running in initrd. Jan 29 13:05:15.082353 systemd[1]: No hostname configured, using default hostname. Jan 29 13:05:15.082375 systemd[1]: Hostname set to . Jan 29 13:05:15.082390 systemd[1]: Initializing machine ID from VM UUID. Jan 29 13:05:15.082404 systemd[1]: Queued start job for default target initrd.target. Jan 29 13:05:15.082419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 13:05:15.082434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 13:05:15.082449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 13:05:15.082463 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 13:05:15.082483 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 13:05:15.082498 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 13:05:15.082515 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 13:05:15.082529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 13:05:15.082544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 13:05:15.082559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 13:05:15.082573 systemd[1]: Reached target paths.target - Path Units. Jan 29 13:05:15.082593 systemd[1]: Reached target slices.target - Slice Units. Jan 29 13:05:15.082608 systemd[1]: Reached target swap.target - Swaps. Jan 29 13:05:15.082623 systemd[1]: Reached target timers.target - Timer Units. Jan 29 13:05:15.082637 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 13:05:15.082651 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 13:05:15.082666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 13:05:15.082692 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 13:05:15.082707 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 13:05:15.082722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 13:05:15.082742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 13:05:15.082757 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 13:05:15.082772 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 13:05:15.082786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 13:05:15.082801 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 13:05:15.082815 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 13:05:15.082830 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 13:05:15.082844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 13:05:15.082858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 13:05:15.082878 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 13:05:15.082893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 13:05:15.082950 systemd-journald[201]: Collecting audit messages is disabled. Jan 29 13:05:15.082989 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 13:05:15.083010 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 13:05:15.083026 systemd-journald[201]: Journal started Jan 29 13:05:15.083057 systemd-journald[201]: Runtime Journal (/run/log/journal/dcb915945a914dc787eb290fc95e20f0) is 4.7M, max 38.0M, 33.2M free. Jan 29 13:05:15.032361 systemd-modules-load[202]: Inserted module 'overlay' Jan 29 13:05:15.140237 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 13:05:15.140269 kernel: Bridge firewalling registered Jan 29 13:05:15.087464 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 29 13:05:15.144300 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 13:05:15.145528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 13:05:15.146566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 13:05:15.156516 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 13:05:15.160453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 13:05:15.169473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 13:05:15.171741 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 13:05:15.179192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 13:05:15.189472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 13:05:15.196345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 13:05:15.206481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 13:05:15.207610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 13:05:15.211066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 13:05:15.221523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 13:05:15.227105 dracut-cmdline[235]: dracut-dracut-053 Jan 29 13:05:15.230798 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 13:05:15.264363 systemd-resolved[242]: Positive Trust Anchors: Jan 29 13:05:15.265435 systemd-resolved[242]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 13:05:15.265490 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 13:05:15.273593 systemd-resolved[242]: Defaulting to hostname 'linux'. Jan 29 13:05:15.275501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 13:05:15.276644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 13:05:15.329346 kernel: SCSI subsystem initialized Jan 29 13:05:15.340376 kernel: Loading iSCSI transport class v2.0-870. Jan 29 13:05:15.354353 kernel: iscsi: registered transport (tcp) Jan 29 13:05:15.379701 kernel: iscsi: registered transport (qla4xxx) Jan 29 13:05:15.379768 kernel: QLogic iSCSI HBA Driver Jan 29 13:05:15.437373 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 13:05:15.444543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 13:05:15.479399 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 13:05:15.479465 kernel: device-mapper: uevent: version 1.0.3 Jan 29 13:05:15.482306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 13:05:15.529358 kernel: raid6: sse2x4 gen() 13971 MB/s Jan 29 13:05:15.547316 kernel: raid6: sse2x2 gen() 9583 MB/s Jan 29 13:05:15.565986 kernel: raid6: sse2x1 gen() 9873 MB/s Jan 29 13:05:15.566048 kernel: raid6: using algorithm sse2x4 gen() 13971 MB/s Jan 29 13:05:15.584951 kernel: raid6: .... xor() 7875 MB/s, rmw enabled Jan 29 13:05:15.585030 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 13:05:15.610322 kernel: xor: automatically using best checksumming function avx Jan 29 13:05:15.806339 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 13:05:15.821276 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 13:05:15.827704 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 13:05:15.851499 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 29 13:05:15.858719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 13:05:15.866461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 13:05:15.896321 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Jan 29 13:05:15.935009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 13:05:15.942495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 13:05:16.062238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 13:05:16.073493 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 13:05:16.096083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 13:05:16.099182 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 13:05:16.101513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 13:05:16.103376 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 13:05:16.116502 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 13:05:16.138247 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 13:05:16.188339 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 29 13:05:16.284157 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 13:05:16.284189 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 13:05:16.284431 kernel: libata version 3.00 loaded. Jan 29 13:05:16.284467 kernel: AVX version of gcm_enc/dec engaged. Jan 29 13:05:16.284487 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 13:05:16.284729 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 13:05:16.284751 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 13:05:16.284955 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 13:05:16.285173 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 13:05:16.285194 kernel: GPT:17805311 != 125829119 Jan 29 13:05:16.285219 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 13:05:16.285244 kernel: GPT:17805311 != 125829119 Jan 29 13:05:16.285262 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 29 13:05:16.285303 kernel: scsi host0: ahci Jan 29 13:05:16.285514 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:05:16.285536 kernel: scsi host1: ahci Jan 29 13:05:16.285764 kernel: scsi host2: ahci Jan 29 13:05:16.285974 kernel: scsi host3: ahci Jan 29 13:05:16.286189 kernel: scsi host4: ahci Jan 29 13:05:16.290495 kernel: scsi host5: ahci Jan 29 13:05:16.290744 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 29 13:05:16.290768 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 29 13:05:16.290786 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 29 13:05:16.290805 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 29 13:05:16.290830 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 29 13:05:16.290849 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 29 13:05:16.290878 kernel: AES CTR mode by8 optimization enabled Jan 29 13:05:16.226917 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 13:05:16.374294 kernel: ACPI: bus type USB registered Jan 29 13:05:16.374337 kernel: usbcore: registered new interface driver usbfs Jan 29 13:05:16.374357 kernel: usbcore: registered new interface driver hub Jan 29 13:05:16.374376 kernel: usbcore: registered new device driver usb Jan 29 13:05:16.227092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 13:05:16.229338 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 13:05:16.230090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 13:05:16.230362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 13:05:16.232175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 13:05:16.242585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 13:05:16.373728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 13:05:16.382736 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 13:05:16.408055 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 13:05:16.588327 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 13:05:16.588415 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 13:05:16.597319 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 13:05:16.597358 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 13:05:16.600020 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 13:05:16.605363 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 13:05:16.650580 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) Jan 29 13:05:16.657303 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (468) Jan 29 13:05:16.664924 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 29 13:05:16.667958 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 13:05:16.694809 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 29 13:05:16.695032 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 13:05:16.695235 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 13:05:16.695459 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 29 13:05:16.695675 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 29 13:05:16.695875 kernel: hub 1-0:1.0: USB hub found Jan 29 13:05:16.696116 kernel: hub 1-0:1.0: 4 ports detected Jan 29 13:05:16.696719 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 13:05:16.696940 kernel: hub 2-0:1.0: USB hub found Jan 29 13:05:16.697151 kernel: hub 2-0:1.0: 4 ports detected Jan 29 13:05:16.674974 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 13:05:16.685216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 13:05:16.689441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 13:05:16.701420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 13:05:16.710474 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 13:05:16.719468 disk-uuid[574]: Primary Header is updated. Jan 29 13:05:16.719468 disk-uuid[574]: Secondary Entries is updated. Jan 29 13:05:16.719468 disk-uuid[574]: Secondary Header is updated. 
Jan 29 13:05:16.725332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:05:16.732320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:05:16.928379 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 13:05:17.069314 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 13:05:17.076485 kernel: usbcore: registered new interface driver usbhid Jan 29 13:05:17.076524 kernel: usbhid: USB HID core driver Jan 29 13:05:17.084990 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 29 13:05:17.085040 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 29 13:05:17.736332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 13:05:17.737943 disk-uuid[575]: The operation has completed successfully. Jan 29 13:05:17.791809 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 13:05:17.791973 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 13:05:17.815499 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 13:05:17.820641 sh[586]: Success Jan 29 13:05:17.839270 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 29 13:05:17.896988 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 13:05:17.904540 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 13:05:17.908179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 13:05:17.940507 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 13:05:17.940577 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 13:05:17.942699 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 13:05:17.946178 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 13:05:17.946218 kernel: BTRFS info (device dm-0): using free space tree Jan 29 13:05:17.956775 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 13:05:17.958210 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 13:05:17.964496 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 13:05:17.969472 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 13:05:17.983186 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 13:05:17.983243 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 13:05:17.984635 kernel: BTRFS info (device vda6): using free space tree Jan 29 13:05:17.993312 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 13:05:18.006917 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 13:05:18.010374 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 13:05:18.018010 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 13:05:18.024509 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 13:05:18.151939 ignition[681]: Ignition 2.20.0 Jan 29 13:05:18.153108 ignition[681]: Stage: fetch-offline Jan 29 13:05:18.153218 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 29 13:05:18.153245 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 13:05:18.154541 ignition[681]: parsed url from cmdline: "" Jan 29 13:05:18.154550 ignition[681]: no config URL provided Jan 29 13:05:18.154566 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 13:05:18.154585 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 29 13:05:18.154608 ignition[681]: failed to fetch config: resource requires networking Jan 29 13:05:18.161813 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 13:05:18.154935 ignition[681]: Ignition finished successfully Jan 29 13:05:18.163741 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 13:05:18.173519 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 13:05:18.203740 systemd-networkd[775]: lo: Link UP Jan 29 13:05:18.203754 systemd-networkd[775]: lo: Gained carrier Jan 29 13:05:18.207245 systemd-networkd[775]: Enumeration completed Jan 29 13:05:18.207868 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 13:05:18.207874 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 13:05:18.208403 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 13:05:18.210557 systemd-networkd[775]: eth0: Link UP Jan 29 13:05:18.210564 systemd-networkd[775]: eth0: Gained carrier Jan 29 13:05:18.210576 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 13:05:18.211142 systemd[1]: Reached target network.target - Network.
Jan 29 13:05:18.222112 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 13:05:18.237414 ignition[777]: Ignition 2.20.0
Jan 29 13:05:18.237436 ignition[777]: Stage: fetch
Jan 29 13:05:18.237741 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 29 13:05:18.237762 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:05:18.237917 ignition[777]: parsed url from cmdline: ""
Jan 29 13:05:18.237925 ignition[777]: no config URL provided
Jan 29 13:05:18.237935 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 13:05:18.237951 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 29 13:05:18.238120 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 13:05:18.238217 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 13:05:18.238257 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 13:05:18.238467 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 13:05:18.265464 systemd-networkd[775]: eth0: DHCPv4 address 10.230.23.118/30, gateway 10.230.23.117 acquired from 10.230.23.117
Jan 29 13:05:18.438710 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 29 13:05:18.457196 ignition[777]: GET result: OK
Jan 29 13:05:18.457857 ignition[777]: parsing config with SHA512: f70b20a1a939068b7f3dbf721a72aef474f24de777a77d233d4b74d07a9022f0d0ad331e26c8a7ea9f2ae2beb381297233a4c7762ddf39c0b2da52fdccd17d23
Jan 29 13:05:18.462096 unknown[777]: fetched base config from "system"
Jan 29 13:05:18.462113 unknown[777]: fetched base config from "system"
Jan 29 13:05:18.462410 ignition[777]: fetch: fetch complete
Jan 29 13:05:18.462123 unknown[777]: fetched user config from "openstack"
Jan 29 13:05:18.462418 ignition[777]: fetch: fetch passed
Jan 29 13:05:18.464632 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 13:05:18.462485 ignition[777]: Ignition finished successfully
Jan 29 13:05:18.478623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 13:05:18.497658 ignition[784]: Ignition 2.20.0
Jan 29 13:05:18.497674 ignition[784]: Stage: kargs
Jan 29 13:05:18.497932 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 13:05:18.500263 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 13:05:18.497952 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:05:18.499043 ignition[784]: kargs: kargs passed
Jan 29 13:05:18.499118 ignition[784]: Ignition finished successfully
Jan 29 13:05:18.508945 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 13:05:18.525980 ignition[790]: Ignition 2.20.0
Jan 29 13:05:18.526003 ignition[790]: Stage: disks
Jan 29 13:05:18.526249 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 29 13:05:18.528680 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 13:05:18.526270 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:05:18.530794 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 13:05:18.527158 ignition[790]: disks: disks passed
Jan 29 13:05:18.531634 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 13:05:18.527229 ignition[790]: Ignition finished successfully
Jan 29 13:05:18.533277 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 13:05:18.534903 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 13:05:18.536181 systemd[1]: Reached target basic.target - Basic System.
Jan 29 13:05:18.544515 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 13:05:18.564026 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 13:05:18.567566 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 13:05:18.577471 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 13:05:18.708307 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none.
Jan 29 13:05:18.709135 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 13:05:18.710672 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 13:05:18.719472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 13:05:18.722692 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 13:05:18.723872 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 13:05:18.725830 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 13:05:18.731502 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 13:05:18.743470 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Jan 29 13:05:18.743508 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 13:05:18.743528 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 13:05:18.743547 kernel: BTRFS info (device vda6): using free space tree
Jan 29 13:05:18.731568 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 13:05:18.748315 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 13:05:18.749861 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 13:05:18.753513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 13:05:18.764645 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 13:05:18.850093 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 13:05:18.858934 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 29 13:05:18.867267 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 13:05:18.875201 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 13:05:18.980590 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 13:05:18.991844 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 13:05:18.995762 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 13:05:19.007244 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 13:05:19.008826 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 13:05:19.037376 ignition[922]: INFO : Ignition 2.20.0
Jan 29 13:05:19.037376 ignition[922]: INFO : Stage: mount
Jan 29 13:05:19.039990 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 13:05:19.039990 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:05:19.039990 ignition[922]: INFO : mount: mount passed
Jan 29 13:05:19.039990 ignition[922]: INFO : Ignition finished successfully
Jan 29 13:05:19.039960 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 13:05:19.042941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 13:05:19.580606 systemd-networkd[775]: eth0: Gained IPv6LL
Jan 29 13:05:21.093204 systemd-networkd[775]: eth0: Ignoring DHCPv6 address 2a02:1348:179:85dd:24:19ff:fee6:1776/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:85dd:24:19ff:fee6:1776/64 assigned by NDisc.
Jan 29 13:05:21.093224 systemd-networkd[775]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 13:05:25.909039 coreos-metadata[808]: Jan 29 13:05:25.908 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 13:05:25.933349 coreos-metadata[808]: Jan 29 13:05:25.933 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 13:05:25.948159 coreos-metadata[808]: Jan 29 13:05:25.948 INFO Fetch successful
Jan 29 13:05:25.949016 coreos-metadata[808]: Jan 29 13:05:25.948 INFO wrote hostname srv-pt0sn.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 13:05:25.950586 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 13:05:25.950758 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 13:05:25.967455 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 13:05:25.981539 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 13:05:25.993319 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Jan 29 13:05:25.997700 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 13:05:25.997750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 13:05:25.999565 kernel: BTRFS info (device vda6): using free space tree
Jan 29 13:05:26.005346 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 13:05:26.007879 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 13:05:26.032981 ignition[957]: INFO : Ignition 2.20.0
Jan 29 13:05:26.035360 ignition[957]: INFO : Stage: files
Jan 29 13:05:26.035360 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 13:05:26.035360 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:05:26.035360 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 13:05:26.038831 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 13:05:26.039912 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 13:05:26.044800 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 13:05:26.046127 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 13:05:26.047585 unknown[957]: wrote ssh authorized keys file for user: core
Jan 29 13:05:26.048730 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 13:05:26.050848 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 13:05:26.052122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 13:05:26.052122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 13:05:26.052122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 13:05:26.052122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 13:05:26.052122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 13:05:26.052122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 13:05:26.065808 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 13:05:26.648012 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 13:05:29.549197 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 13:05:29.549197 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 13:05:29.556653 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 13:05:29.556653 ignition[957]: INFO : files: files passed
Jan 29 13:05:29.556653 ignition[957]: INFO : Ignition finished successfully
Jan 29 13:05:29.551969 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 13:05:29.564609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 13:05:29.566976 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 13:05:29.576699 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 13:05:29.577601 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 13:05:29.586312 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 13:05:29.586312 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 13:05:29.589044 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 13:05:29.590428 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 13:05:29.591607 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 13:05:29.598506 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 13:05:29.628309 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 13:05:29.628503 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 13:05:29.630662 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 13:05:29.631930 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 13:05:29.633711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 13:05:29.640510 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 13:05:29.657112 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 13:05:29.662499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 13:05:29.688384 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 13:05:29.690249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 13:05:29.691354 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 13:05:29.693108 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 13:05:29.693319 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 13:05:29.695164 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 13:05:29.696091 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 13:05:29.697682 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 13:05:29.699237 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 13:05:29.700104 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 13:05:29.701128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 13:05:29.702033 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 13:05:29.703567 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 13:05:29.704477 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 13:05:29.706234 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 13:05:29.707702 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 13:05:29.707950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 13:05:29.709054 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 13:05:29.709975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 13:05:29.710872 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 13:05:29.713644 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 13:05:29.714924 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 13:05:29.715177 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 13:05:29.716890 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 13:05:29.717130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 13:05:29.718830 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 13:05:29.719055 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 13:05:29.728552 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 13:05:29.733317 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 13:05:29.733530 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 13:05:29.737512 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 13:05:29.738348 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 13:05:29.738690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 13:05:29.740631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 13:05:29.741303 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 13:05:29.754620 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 13:05:29.755575 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 13:05:29.758375 ignition[1009]: INFO : Ignition 2.20.0
Jan 29 13:05:29.758375 ignition[1009]: INFO : Stage: umount
Jan 29 13:05:29.758375 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 13:05:29.758375 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 13:05:29.761722 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 13:05:29.765753 ignition[1009]: INFO : umount: umount passed
Jan 29 13:05:29.765753 ignition[1009]: INFO : Ignition finished successfully
Jan 29 13:05:29.761882 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 13:05:29.763419 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 13:05:29.763550 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 13:05:29.764651 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 13:05:29.764728 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 13:05:29.768669 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 13:05:29.768736 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 13:05:29.770238 systemd[1]: Stopped target network.target - Network.
Jan 29 13:05:29.772363 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 13:05:29.772455 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 13:05:29.774731 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 13:05:29.776031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 13:05:29.779333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 13:05:29.788389 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 13:05:29.790846 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 13:05:29.791565 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 13:05:29.791638 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 13:05:29.793447 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 13:05:29.793517 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 13:05:29.794942 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 13:05:29.795015 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 13:05:29.796441 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 13:05:29.796519 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 13:05:29.798188 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 13:05:29.799927 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 13:05:29.802954 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 13:05:29.803448 systemd-networkd[775]: eth0: DHCPv6 lease lost
Jan 29 13:05:29.806127 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 13:05:29.806265 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 13:05:29.807271 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 13:05:29.807526 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 13:05:29.811822 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 13:05:29.812415 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 13:05:29.813737 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 13:05:29.813817 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 13:05:29.820440 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 13:05:29.821129 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 13:05:29.821201 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 13:05:29.822153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 13:05:29.825743 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 13:05:29.825910 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 13:05:29.836778 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 13:05:29.836877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 13:05:29.838755 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 13:05:29.838839 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 13:05:29.840709 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 13:05:29.840781 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 13:05:29.844958 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 13:05:29.845198 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 13:05:29.847916 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 13:05:29.848053 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 13:05:29.850268 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 13:05:29.850794 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 13:05:29.852471 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 13:05:29.852529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 13:05:29.854108 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 13:05:29.854183 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 13:05:29.856407 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 13:05:29.856477 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 13:05:29.857825 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 13:05:29.857892 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 13:05:29.867566 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 13:05:29.868334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 13:05:29.868423 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 13:05:29.869221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 13:05:29.871329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 13:05:29.875264 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 13:05:29.875479 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 13:05:29.877866 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 13:05:29.887548 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 13:05:29.897182 systemd[1]: Switching root.
Jan 29 13:05:29.936337 systemd-journald[201]: Journal stopped
Jan 29 13:05:31.375025 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 29 13:05:31.375209 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 13:05:31.375260 kernel: SELinux: policy capability open_perms=1
Jan 29 13:05:31.376312 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 13:05:31.376353 kernel: SELinux: policy capability always_check_network=0
Jan 29 13:05:31.376388 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 13:05:31.376427 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 13:05:31.376447 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 13:05:31.376475 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 13:05:31.376519 kernel: audit: type=1403 audit(1738155930.152:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 13:05:31.376556 systemd[1]: Successfully loaded SELinux policy in 48.134ms.
Jan 29 13:05:31.376611 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.896ms.
Jan 29 13:05:31.376643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 13:05:31.376674 systemd[1]: Detected virtualization kvm.
Jan 29 13:05:31.376702 systemd[1]: Detected architecture x86-64.
Jan 29 13:05:31.376732 systemd[1]: Detected first boot.
Jan 29 13:05:31.376753 systemd[1]: Hostname set to .
Jan 29 13:05:31.376795 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 13:05:31.377591 zram_generator::config[1051]: No configuration found.
Jan 29 13:05:31.377644 systemd[1]: Populated /etc with preset unit settings.
Jan 29 13:05:31.377681 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 13:05:31.377713 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 13:05:31.377762 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 13:05:31.377811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 13:05:31.377841 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 13:05:31.377872 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 13:05:31.377907 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 13:05:31.377936 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 13:05:31.377957 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 13:05:31.377978 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 13:05:31.377997 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 13:05:31.378043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 13:05:31.378078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 13:05:31.378107 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 13:05:31.378128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 13:05:31.378156 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 13:05:31.378177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 13:05:31.378207 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 13:05:31.378228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 13:05:31.378260 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 13:05:31.378323 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 13:05:31.378346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 13:05:31.378378 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 13:05:31.378401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 13:05:31.378432 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 13:05:31.378453 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 13:05:31.378492 systemd[1]: Reached target swap.target - Swaps.
Jan 29 13:05:31.378514 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 13:05:31.378547 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 13:05:31.378568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 13:05:31.378600 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 13:05:31.378629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 13:05:31.378651 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 13:05:31.378676 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 13:05:31.378697 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 13:05:31.378729 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 13:05:31.378759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:31.378780 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 13:05:31.378800 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 13:05:31.378820 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 13:05:31.378841 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 13:05:31.378862 systemd[1]: Reached target machines.target - Containers.
Jan 29 13:05:31.378882 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 13:05:31.378915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 13:05:31.378937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 13:05:31.378958 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 13:05:31.378978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 13:05:31.379018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 13:05:31.379059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 13:05:31.379100 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 13:05:31.379122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 13:05:31.379142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 13:05:31.379169 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 13:05:31.379190 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 13:05:31.379216 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 13:05:31.379236 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 13:05:31.379257 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 13:05:31.379277 kernel: fuse: init (API version 7.39)
Jan 29 13:05:31.379414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 13:05:31.379443 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 13:05:31.379464 kernel: loop: module loaded
Jan 29 13:05:31.379495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 13:05:31.379515 kernel: ACPI: bus type drm_connector registered
Jan 29 13:05:31.379534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 13:05:31.379561 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 13:05:31.379582 systemd[1]: Stopped verity-setup.service.
Jan 29 13:05:31.379613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:31.379686 systemd-journald[1144]: Collecting audit messages is disabled.
Jan 29 13:05:31.379742 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 13:05:31.379764 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 13:05:31.379784 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 13:05:31.379805 systemd-journald[1144]: Journal started
Jan 29 13:05:31.379849 systemd-journald[1144]: Runtime Journal (/run/log/journal/dcb915945a914dc787eb290fc95e20f0) is 4.7M, max 38.0M, 33.2M free.
Jan 29 13:05:30.943645 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 13:05:30.964528 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 13:05:30.965900 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 13:05:31.384314 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 13:05:31.385621 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 13:05:31.386545 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 13:05:31.387446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 13:05:31.388551 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 13:05:31.389722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 13:05:31.390987 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 13:05:31.391206 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 13:05:31.392435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 13:05:31.392671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 13:05:31.393960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 13:05:31.394172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 13:05:31.395424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 13:05:31.395656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 13:05:31.396840 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 13:05:31.397044 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 13:05:31.398278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 13:05:31.398548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 13:05:31.399749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 13:05:31.400933 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 13:05:31.402084 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 13:05:31.417172 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 13:05:31.429385 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 13:05:31.437933 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 13:05:31.440385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 13:05:31.440441 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 13:05:31.443894 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 13:05:31.451506 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 13:05:31.457437 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 13:05:31.458358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 13:05:31.468611 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 13:05:31.494532 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 13:05:31.495435 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 13:05:31.503594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 13:05:31.504862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 13:05:31.508523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 13:05:31.517518 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 13:05:31.522438 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 13:05:31.530770 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 13:05:31.532779 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 13:05:31.535806 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 13:05:31.557178 systemd-journald[1144]: Time spent on flushing to /var/log/journal/dcb915945a914dc787eb290fc95e20f0 is 79.870ms for 1124 entries.
Jan 29 13:05:31.557178 systemd-journald[1144]: System Journal (/var/log/journal/dcb915945a914dc787eb290fc95e20f0) is 8.0M, max 584.8M, 576.8M free.
Jan 29 13:05:31.670415 systemd-journald[1144]: Received client request to flush runtime journal.
Jan 29 13:05:31.670490 kernel: loop0: detected capacity change from 0 to 138184
Jan 29 13:05:31.670520 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 13:05:31.581391 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 13:05:31.583586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 13:05:31.593634 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 13:05:31.656484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 13:05:31.674698 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 13:05:31.679161 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 13:05:31.687559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 13:05:31.703339 kernel: loop1: detected capacity change from 0 to 8
Jan 29 13:05:31.705533 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 13:05:31.706502 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 13:05:31.710626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 13:05:31.718735 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 13:05:31.754321 kernel: loop2: detected capacity change from 0 to 205544
Jan 29 13:05:31.779136 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jan 29 13:05:31.780341 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jan 29 13:05:31.782827 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 13:05:31.797130 kernel: loop3: detected capacity change from 0 to 140992
Jan 29 13:05:31.795832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 13:05:31.854341 kernel: loop4: detected capacity change from 0 to 138184
Jan 29 13:05:31.880318 kernel: loop5: detected capacity change from 0 to 8
Jan 29 13:05:31.888332 kernel: loop6: detected capacity change from 0 to 205544
Jan 29 13:05:31.917321 kernel: loop7: detected capacity change from 0 to 140992
Jan 29 13:05:31.945277 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 13:05:31.948053 (sd-merge)[1209]: Merged extensions into '/usr'.
Jan 29 13:05:31.958615 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 13:05:31.958643 systemd[1]: Reloading...
Jan 29 13:05:32.129380 zram_generator::config[1239]: No configuration found.
Jan 29 13:05:32.315672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 13:05:32.381358 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 13:05:32.389782 systemd[1]: Reloading finished in 426 ms.
Jan 29 13:05:32.421322 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 13:05:32.423210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 13:05:32.439653 systemd[1]: Starting ensure-sysext.service...
Jan 29 13:05:32.448611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 13:05:32.470364 systemd[1]: Reloading requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
Jan 29 13:05:32.470401 systemd[1]: Reloading...
Jan 29 13:05:32.523425 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 13:05:32.524019 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 13:05:32.527553 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 13:05:32.527958 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 29 13:05:32.528063 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 29 13:05:32.542269 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 13:05:32.542305 systemd-tmpfiles[1292]: Skipping /boot
Jan 29 13:05:32.585149 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 13:05:32.585172 systemd-tmpfiles[1292]: Skipping /boot
Jan 29 13:05:32.638977 zram_generator::config[1318]: No configuration found.
Jan 29 13:05:32.823996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 13:05:32.891455 systemd[1]: Reloading finished in 420 ms.
Jan 29 13:05:32.916425 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 13:05:32.922904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 13:05:32.936623 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 13:05:32.941516 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 13:05:32.945593 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 13:05:32.954727 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 13:05:32.960537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 13:05:32.963605 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 13:05:32.970254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:32.970569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 13:05:32.980669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 13:05:32.994641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 13:05:32.999631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 13:05:33.000540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 13:05:33.000696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:33.005959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:33.006231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 13:05:33.006486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 13:05:33.017414 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 13:05:33.018550 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:33.020665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 13:05:33.020911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 13:05:33.034381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:33.037393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 13:05:33.044407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 13:05:33.052905 systemd-udevd[1381]: Using default interface naming scheme 'v255'.
Jan 29 13:05:33.059630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 13:05:33.060558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 13:05:33.060749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 13:05:33.063169 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 13:05:33.064942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 13:05:33.065141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 13:05:33.071352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 13:05:33.071578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 13:05:33.077250 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 13:05:33.077970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 13:05:33.084356 systemd[1]: Finished ensure-sysext.service.
Jan 29 13:05:33.092462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 13:05:33.092681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 13:05:33.103666 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 13:05:33.108397 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 13:05:33.111999 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 13:05:33.113341 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 13:05:33.113583 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 13:05:33.136391 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 13:05:33.168776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 13:05:33.183000 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 13:05:33.188796 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 13:05:33.190083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 13:05:33.208411 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 13:05:33.214942 augenrules[1429]: No rules
Jan 29 13:05:33.221784 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 13:05:33.222187 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 13:05:33.348074 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 13:05:33.349206 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 13:05:33.372779 systemd-networkd[1422]: lo: Link UP
Jan 29 13:05:33.372794 systemd-networkd[1422]: lo: Gained carrier
Jan 29 13:05:33.373891 systemd-networkd[1422]: Enumeration completed
Jan 29 13:05:33.374035 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 13:05:33.374972 systemd-resolved[1380]: Positive Trust Anchors:
Jan 29 13:05:33.374993 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 13:05:33.375054 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 13:05:33.383202 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 13:05:33.389230 systemd-resolved[1380]: Using system hostname 'srv-pt0sn.gb1.brightbox.com'.
Jan 29 13:05:33.394608 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 13:05:33.395553 systemd[1]: Reached target network.target - Network.
Jan 29 13:05:33.396237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 13:05:33.421503 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 13:05:33.470597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1439)
Jan 29 13:05:33.520014 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 13:05:33.521205 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 13:05:33.525591 systemd-networkd[1422]: eth0: Link UP
Jan 29 13:05:33.525605 systemd-networkd[1422]: eth0: Gained carrier
Jan 29 13:05:33.525635 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 13:05:33.574664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 13:05:33.586093 systemd-networkd[1422]: eth0: DHCPv4 address 10.230.23.118/30, gateway 10.230.23.117 acquired from 10.230.23.117
Jan 29 13:05:33.587866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 13:05:33.593196 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 29 13:05:33.612335 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 13:05:33.617957 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 13:05:33.623309 kernel: ACPI: button: Power Button [PWRF]
Jan 29 13:05:33.637639 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 13:05:33.689317 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 13:05:33.700718 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 13:05:33.700771 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 13:05:33.701074 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 13:05:33.780001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 13:05:33.920426 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 13:05:33.931243 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 13:05:33.961363 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 13:05:34.003171 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 13:05:34.015252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 13:05:34.017466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 13:05:34.018404 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 13:05:34.019568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 13:05:34.020464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 13:05:34.021878 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 13:05:34.022897 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 13:05:34.023786 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 13:05:34.024622 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 13:05:34.024678 systemd[1]: Reached target paths.target - Path Units.
Jan 29 13:05:34.025368 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 13:05:34.028185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 13:05:34.031814 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 13:05:34.039143 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 13:05:34.043587 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 13:05:34.047368 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 13:05:34.048234 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 13:05:34.049021 systemd[1]: Reached target basic.target - Basic System.
Jan 29 13:05:34.055903 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 13:05:34.055955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 13:05:34.059449 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 13:05:34.063133 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 13:05:34.068712 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 13:05:34.074505 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 13:05:34.078486 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 13:05:34.082576 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 13:05:34.084451 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 13:05:34.093029 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 13:05:34.110623 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 13:05:34.120563 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 13:05:34.130581 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 13:05:34.133306 jq[1476]: false
Jan 29 13:05:34.133258 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 13:05:34.134146 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 13:05:34.141577 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 13:05:34.151497 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 13:05:34.153923 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 13:05:34.163036 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 13:05:34.164435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 13:05:34.164957 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 13:05:34.165184 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found loop4
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found loop5
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found loop6
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found loop7
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda1
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda2
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda3
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found usr
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda4
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda6
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda7
Jan 29 13:05:34.178304 extend-filesystems[1477]: Found vda9
Jan 29 13:05:34.178304 extend-filesystems[1477]: Checking size of /dev/vda9
Jan 29 13:05:34.214489 extend-filesystems[1477]: Resized partition /dev/vda9
Jan 29 13:05:34.181276 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 13:05:34.180996 dbus-daemon[1475]: [system] SELinux support is enabled
Jan 29 13:05:34.190120 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 13:05:34.191103 dbus-daemon[1475]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1422 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 13:05:34.190166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 13:05:34.191910 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 13:05:34.219804 jq[1487]: true
Jan 29 13:05:34.197807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 13:05:34.197844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 13:05:34.225386 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024)
Jan 29 13:05:34.234212 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 29 13:05:34.227634 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 13:05:34.230488 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 13:05:34.230789 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 13:05:34.269627 update_engine[1484]: I20250129 13:05:34.268880 1484 main.cc:92] Flatcar Update Engine starting
Jan 29 13:05:34.269858 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 13:05:34.279548 update_engine[1484]: I20250129 13:05:34.277704 1484 update_check_scheduler.cc:74] Next update check in 3m48s
Jan 29 13:05:34.278325 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 13:05:34.283706 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 13:05:34.290347 jq[1506]: true
Jan 29 13:05:34.326376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1439)
Jan 29 13:05:34.411996 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 29 13:05:34.412761 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 13:05:34.413117 systemd-logind[1483]: New seat seat0.
Jan 29 13:05:34.414675 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 13:05:34.486901 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 13:05:34.568632 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 29 13:05:34.569345 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 29 13:05:34.573538 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 29 13:05:34.590662 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 29 13:05:34.606240 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 13:05:34.611197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 13:05:34.624445 polkitd[1539]: Started polkitd version 121
Jan 29 13:05:34.628419 systemd[1]: Starting sshkeys.service...
Jan 29 13:05:34.638821 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 13:05:34.643889 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 13:05:34.645233 polkitd[1539]: Finished loading, compiling and executing 2 rules
Jan 29 13:05:34.646007 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 29 13:05:34.646727 systemd[1]: Started polkit.service - Authorization Manager.
Jan 29 13:05:34.647435 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 29 13:05:34.675771 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 13:05:34.686398 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 13:05:34.716332 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 13:05:34.730143 systemd-hostnamed[1505]: Hostname set to (static) Jan 29 13:05:34.740629 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 13:05:34.740629 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 13:05:34.740629 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 13:05:34.753241 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Jan 29 13:05:34.741068 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 13:05:34.743407 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 13:05:34.749538 systemd-networkd[1422]: eth0: Gained IPv6LL Jan 29 13:05:34.750698 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Jan 29 13:05:34.759449 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 13:05:34.763745 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 13:05:34.777727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:34.789457 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 13:05:34.815564 containerd[1508]: time="2025-01-29T13:05:34.814712661Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 13:05:34.858673 containerd[1508]: time="2025-01-29T13:05:34.858597612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 13:05:34.869969 containerd[1508]: time="2025-01-29T13:05:34.869884591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 13:05:34.869969 containerd[1508]: time="2025-01-29T13:05:34.869960401Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 13:05:34.870154 containerd[1508]: time="2025-01-29T13:05:34.869992965Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 13:05:34.870344 containerd[1508]: time="2025-01-29T13:05:34.870316808Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 13:05:34.870464 containerd[1508]: time="2025-01-29T13:05:34.870359553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 13:05:34.870506 containerd[1508]: time="2025-01-29T13:05:34.870480392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 13:05:34.870543 containerd[1508]: time="2025-01-29T13:05:34.870503617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 13:05:34.870771 containerd[1508]: time="2025-01-29T13:05:34.870740714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 13:05:34.870822 containerd[1508]: time="2025-01-29T13:05:34.870773164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 29 13:05:34.870859 containerd[1508]: time="2025-01-29T13:05:34.870821897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 13:05:34.870859 containerd[1508]: time="2025-01-29T13:05:34.870843037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 13:05:34.871002 containerd[1508]: time="2025-01-29T13:05:34.870976681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 13:05:34.873235 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 13:05:34.873519 containerd[1508]: time="2025-01-29T13:05:34.872619949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 13:05:34.873519 containerd[1508]: time="2025-01-29T13:05:34.872777502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 13:05:34.873519 containerd[1508]: time="2025-01-29T13:05:34.872802777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 13:05:34.875560 containerd[1508]: time="2025-01-29T13:05:34.874657435Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 13:05:34.875560 containerd[1508]: time="2025-01-29T13:05:34.874774885Z" level=info msg="metadata content store policy set" policy=shared Jan 29 13:05:34.884736 containerd[1508]: time="2025-01-29T13:05:34.884671352Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 29 13:05:34.884920 containerd[1508]: time="2025-01-29T13:05:34.884782140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 13:05:34.884920 containerd[1508]: time="2025-01-29T13:05:34.884812832Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 13:05:34.884920 containerd[1508]: time="2025-01-29T13:05:34.884842041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 13:05:34.884920 containerd[1508]: time="2025-01-29T13:05:34.884867674Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885181455Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885564253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885768602Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885796623Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885819089Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885840969Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885865996Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885886048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885907354Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885929279Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885957641Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885979926Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.885998705Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 13:05:34.886294 containerd[1508]: time="2025-01-29T13:05:34.886061494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886088380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886121035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886146519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886167420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886189439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886208994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886229651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886249183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886270816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886602956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886630117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886650280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886672677Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 29 13:05:34.886805 containerd[1508]: time="2025-01-29T13:05:34.886714533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.888403 containerd[1508]: time="2025-01-29T13:05:34.888355664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.888403 containerd[1508]: time="2025-01-29T13:05:34.888392963Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 13:05:34.888617 containerd[1508]: time="2025-01-29T13:05:34.888508195Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 13:05:34.889540 containerd[1508]: time="2025-01-29T13:05:34.889455518Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 13:05:34.889540 containerd[1508]: time="2025-01-29T13:05:34.889490358Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 13:05:34.889540 containerd[1508]: time="2025-01-29T13:05:34.889517926Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 13:05:34.889540 containerd[1508]: time="2025-01-29T13:05:34.889535184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.890074 containerd[1508]: time="2025-01-29T13:05:34.889563057Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 13:05:34.890074 containerd[1508]: time="2025-01-29T13:05:34.889594363Z" level=info msg="NRI interface is disabled by configuration." 
Jan 29 13:05:34.890074 containerd[1508]: time="2025-01-29T13:05:34.889628458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 13:05:34.890546 containerd[1508]: time="2025-01-29T13:05:34.890026737Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 13:05:34.890546 containerd[1508]: time="2025-01-29T13:05:34.890109208Z" level=info msg="Connect containerd service" Jan 29 13:05:34.890546 containerd[1508]: time="2025-01-29T13:05:34.890175120Z" level=info msg="using legacy CRI server" Jan 29 13:05:34.890546 containerd[1508]: time="2025-01-29T13:05:34.890192161Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 13:05:34.891668 containerd[1508]: time="2025-01-29T13:05:34.891145328Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 13:05:34.893316 containerd[1508]: time="2025-01-29T13:05:34.892733768Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 13:05:34.893316 containerd[1508]: time="2025-01-29T13:05:34.892892658Z" level=info msg="Start subscribing containerd event" Jan 29 13:05:34.893316 containerd[1508]: time="2025-01-29T13:05:34.892960734Z" level=info msg="Start recovering state" Jan 29 13:05:34.893316 containerd[1508]: 
time="2025-01-29T13:05:34.893083079Z" level=info msg="Start event monitor" Jan 29 13:05:34.893593 containerd[1508]: time="2025-01-29T13:05:34.893567038Z" level=info msg="Start snapshots syncer" Jan 29 13:05:34.893647 containerd[1508]: time="2025-01-29T13:05:34.893602536Z" level=info msg="Start cni network conf syncer for default" Jan 29 13:05:34.893647 containerd[1508]: time="2025-01-29T13:05:34.893622506Z" level=info msg="Start streaming server" Jan 29 13:05:34.895201 containerd[1508]: time="2025-01-29T13:05:34.895170527Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 13:05:34.895280 containerd[1508]: time="2025-01-29T13:05:34.895257589Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 13:05:34.897375 containerd[1508]: time="2025-01-29T13:05:34.896463224Z" level=info msg="containerd successfully booted in 0.083042s" Jan 29 13:05:34.896583 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 13:05:34.926262 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 13:05:34.959099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 13:05:34.970007 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 13:05:34.980586 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 13:05:34.981054 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 13:05:34.992543 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 13:05:35.005968 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 13:05:35.016919 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 13:05:35.026925 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 13:05:35.028273 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 13:05:35.727485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 13:05:35.732194 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:05:36.256808 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Jan 29 13:05:36.258973 systemd-networkd[1422]: eth0: Ignoring DHCPv6 address 2a02:1348:179:85dd:24:19ff:fee6:1776/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:85dd:24:19ff:fee6:1776/64 assigned by NDisc. Jan 29 13:05:36.258986 systemd-networkd[1422]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 29 13:05:36.333996 kubelet[1592]: E0129 13:05:36.333912 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:05:36.336743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:05:36.336998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 13:05:36.337502 systemd[1]: kubelet.service: Consumed 1.005s CPU time. Jan 29 13:05:38.013622 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Jan 29 13:05:40.052042 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 13:05:40.065891 systemd[1]: Started sshd@0-10.230.23.118:22-147.75.109.163:52210.service - OpenSSH per-connection server daemon (147.75.109.163:52210). Jan 29 13:05:40.156889 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 13:05:40.161793 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 13:05:40.178952 systemd-logind[1483]: New session 1 of user core. 
Jan 29 13:05:40.182414 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 13:05:40.189898 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 13:05:40.194496 systemd-logind[1483]: New session 2 of user core. Jan 29 13:05:40.217651 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 13:05:40.226083 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 13:05:40.243673 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 13:05:40.384898 systemd[1611]: Queued start job for default target default.target. Jan 29 13:05:40.393124 systemd[1611]: Created slice app.slice - User Application Slice. Jan 29 13:05:40.393171 systemd[1611]: Reached target paths.target - Paths. Jan 29 13:05:40.393208 systemd[1611]: Reached target timers.target - Timers. Jan 29 13:05:40.395256 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 13:05:40.411129 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 13:05:40.411364 systemd[1611]: Reached target sockets.target - Sockets. Jan 29 13:05:40.411391 systemd[1611]: Reached target basic.target - Basic System. Jan 29 13:05:40.411463 systemd[1611]: Reached target default.target - Main User Target. Jan 29 13:05:40.411541 systemd[1611]: Startup finished in 157ms. Jan 29 13:05:40.411766 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 13:05:40.425562 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 13:05:40.427007 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 13:05:41.036689 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 52210 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:05:41.039363 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:05:41.046575 systemd-logind[1483]: New session 3 of user core. Jan 29 13:05:41.058668 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 13:05:41.242749 coreos-metadata[1474]: Jan 29 13:05:41.242 WARN failed to locate config-drive, using the metadata service API instead Jan 29 13:05:41.271256 coreos-metadata[1474]: Jan 29 13:05:41.271 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 13:05:41.281166 coreos-metadata[1474]: Jan 29 13:05:41.281 INFO Fetch failed with 404: resource not found Jan 29 13:05:41.281404 coreos-metadata[1474]: Jan 29 13:05:41.281 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 13:05:41.281857 coreos-metadata[1474]: Jan 29 13:05:41.281 INFO Fetch successful Jan 29 13:05:41.282016 coreos-metadata[1474]: Jan 29 13:05:41.281 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 13:05:41.294650 coreos-metadata[1474]: Jan 29 13:05:41.294 INFO Fetch successful Jan 29 13:05:41.294790 coreos-metadata[1474]: Jan 29 13:05:41.294 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 13:05:41.308659 coreos-metadata[1474]: Jan 29 13:05:41.308 INFO Fetch successful Jan 29 13:05:41.308831 coreos-metadata[1474]: Jan 29 13:05:41.308 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 13:05:41.322600 coreos-metadata[1474]: Jan 29 13:05:41.322 INFO Fetch successful Jan 29 13:05:41.322768 coreos-metadata[1474]: Jan 29 13:05:41.322 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 13:05:41.343227 coreos-metadata[1474]: Jan 29 13:05:41.343 INFO Fetch successful
Jan 29 13:05:41.373592 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 13:05:41.374689 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 13:05:41.779496 coreos-metadata[1549]: Jan 29 13:05:41.779 WARN failed to locate config-drive, using the metadata service API instead Jan 29 13:05:41.803097 coreos-metadata[1549]: Jan 29 13:05:41.803 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 13:05:41.824861 systemd[1]: Started sshd@1-10.230.23.118:22-147.75.109.163:52224.service - OpenSSH per-connection server daemon (147.75.109.163:52224). Jan 29 13:05:41.826614 coreos-metadata[1549]: Jan 29 13:05:41.826 INFO Fetch successful Jan 29 13:05:41.826614 coreos-metadata[1549]: Jan 29 13:05:41.826 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 13:05:41.861035 coreos-metadata[1549]: Jan 29 13:05:41.860 INFO Fetch successful Jan 29 13:05:41.863403 unknown[1549]: wrote ssh authorized keys file for user: core Jan 29 13:05:41.894185 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys" Jan 29 13:05:41.895093 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 13:05:41.898422 systemd[1]: Finished sshkeys.service. Jan 29 13:05:41.899892 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 13:05:41.900110 systemd[1]: Startup finished in 1.419s (kernel) + 15.401s (initrd) + 11.795s (userspace) = 28.616s. Jan 29 13:05:42.721489 sshd[1653]: Accepted publickey for core from 147.75.109.163 port 52224 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:05:42.723720 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:05:42.733775 systemd-logind[1483]: New session 4 of user core.
Jan 29 13:05:42.753872 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 13:05:43.337982 sshd[1659]: Connection closed by 147.75.109.163 port 52224 Jan 29 13:05:43.338920 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Jan 29 13:05:43.344117 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Jan 29 13:05:43.344602 systemd[1]: sshd@1-10.230.23.118:22-147.75.109.163:52224.service: Deactivated successfully. Jan 29 13:05:43.346629 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 13:05:43.347784 systemd-logind[1483]: Removed session 4. Jan 29 13:05:43.490554 systemd[1]: Started sshd@2-10.230.23.118:22-147.75.109.163:52238.service - OpenSSH per-connection server daemon (147.75.109.163:52238). Jan 29 13:05:44.392277 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 52238 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:05:44.394213 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:05:44.402346 systemd-logind[1483]: New session 5 of user core. Jan 29 13:05:44.408561 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 13:05:45.001353 sshd[1666]: Connection closed by 147.75.109.163 port 52238 Jan 29 13:05:45.002562 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jan 29 13:05:45.008234 systemd[1]: sshd@2-10.230.23.118:22-147.75.109.163:52238.service: Deactivated successfully. Jan 29 13:05:45.010207 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 13:05:45.011071 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Jan 29 13:05:45.012707 systemd-logind[1483]: Removed session 5. Jan 29 13:05:45.160634 systemd[1]: Started sshd@3-10.230.23.118:22-147.75.109.163:52248.service - OpenSSH per-connection server daemon (147.75.109.163:52248). 
Jan 29 13:05:46.054742 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 52248 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:05:46.056697 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:05:46.063563 systemd-logind[1483]: New session 6 of user core. Jan 29 13:05:46.073708 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 13:05:46.531336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 13:05:46.537524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 13:05:46.674337 sshd[1673]: Connection closed by 147.75.109.163 port 52248 Jan 29 13:05:46.677013 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jan 29 13:05:46.682652 systemd[1]: sshd@3-10.230.23.118:22-147.75.109.163:52248.service: Deactivated successfully. Jan 29 13:05:46.685223 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 13:05:46.688920 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jan 29 13:05:46.691205 systemd-logind[1483]: Removed session 6. Jan 29 13:05:46.692555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 13:05:46.703762 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 13:05:46.755429 kubelet[1684]: E0129 13:05:46.755049 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 13:05:46.759966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 13:05:46.760313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 13:05:46.831642 systemd[1]: Started sshd@4-10.230.23.118:22-147.75.109.163:52260.service - OpenSSH per-connection server daemon (147.75.109.163:52260). Jan 29 13:05:47.720471 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 52260 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:05:47.722253 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:05:47.729889 systemd-logind[1483]: New session 7 of user core. Jan 29 13:05:47.739554 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 13:05:48.207801 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 13:05:48.209015 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:05:48.229442 sudo[1697]: pam_unix(sudo:session): session closed for user root Jan 29 13:05:48.372455 sshd[1696]: Connection closed by 147.75.109.163 port 52260 Jan 29 13:05:48.373601 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Jan 29 13:05:48.378941 systemd[1]: sshd@4-10.230.23.118:22-147.75.109.163:52260.service: Deactivated successfully. 
Jan 29 13:05:48.382141 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 13:05:48.384555 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Jan 29 13:05:48.386451 systemd-logind[1483]: Removed session 7. Jan 29 13:05:48.530648 systemd[1]: Started sshd@5-10.230.23.118:22-147.75.109.163:45458.service - OpenSSH per-connection server daemon (147.75.109.163:45458). Jan 29 13:05:49.431554 sshd[1702]: Accepted publickey for core from 147.75.109.163 port 45458 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0 Jan 29 13:05:49.433546 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 13:05:49.441135 systemd-logind[1483]: New session 8 of user core. Jan 29 13:05:49.451587 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 13:05:49.907722 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 13:05:49.908269 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:05:49.915686 sudo[1706]: pam_unix(sudo:session): session closed for user root Jan 29 13:05:49.924699 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 13:05:49.925214 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 13:05:49.952869 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 13:05:49.990773 augenrules[1728]: No rules Jan 29 13:05:49.991656 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 13:05:49.991925 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 13:05:49.993215 sudo[1705]: pam_unix(sudo:session): session closed for user root
Jan 29 13:05:50.136583 sshd[1704]: Connection closed by 147.75.109.163 port 45458
Jan 29 13:05:50.137519 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Jan 29 13:05:50.143159 systemd[1]: sshd@5-10.230.23.118:22-147.75.109.163:45458.service: Deactivated successfully.
Jan 29 13:05:50.146654 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 13:05:50.147958 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit.
Jan 29 13:05:50.149535 systemd-logind[1483]: Removed session 8.
Jan 29 13:05:50.290404 systemd[1]: Started sshd@6-10.230.23.118:22-147.75.109.163:45470.service - OpenSSH per-connection server daemon (147.75.109.163:45470).
Jan 29 13:05:51.190211 sshd[1736]: Accepted publickey for core from 147.75.109.163 port 45470 ssh2: RSA SHA256:N4m0UAGAVL0aGRQpLGyvungYkW8dGkNI4mN5vZ/Bmd0
Jan 29 13:05:51.192204 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 13:05:51.199179 systemd-logind[1483]: New session 9 of user core.
Jan 29 13:05:51.207642 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 13:05:51.668144 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 13:05:51.668689 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 13:05:52.424263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 13:05:52.432623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 13:05:52.467396 systemd[1]: Reloading requested from client PID 1771 ('systemctl') (unit session-9.scope)...
Jan 29 13:05:52.467652 systemd[1]: Reloading...
Jan 29 13:05:52.613361 zram_generator::config[1810]: No configuration found.
Jan 29 13:05:52.810863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 13:05:52.923316 systemd[1]: Reloading finished in 454 ms.
Jan 29 13:05:53.004729 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 13:05:53.005215 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 13:05:53.005896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 13:05:53.016894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 13:05:53.180352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 13:05:53.195801 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 13:05:53.270922 kubelet[1878]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 13:05:53.270922 kubelet[1878]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 13:05:53.270922 kubelet[1878]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 13:05:53.271603 kubelet[1878]: I0129 13:05:53.271000 1878 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 13:05:53.853086 kubelet[1878]: I0129 13:05:53.852279 1878 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 13:05:53.853086 kubelet[1878]: I0129 13:05:53.852366 1878 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 13:05:53.853086 kubelet[1878]: I0129 13:05:53.852870 1878 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 13:05:53.887330 kubelet[1878]: I0129 13:05:53.887256 1878 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 13:05:53.897091 kubelet[1878]: E0129 13:05:53.897012 1878 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 13:05:53.897427 kubelet[1878]: I0129 13:05:53.897399 1878 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 13:05:53.906507 kubelet[1878]: I0129 13:05:53.906461 1878 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 13:05:53.908421 kubelet[1878]: I0129 13:05:53.908385 1878 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 13:05:53.908898 kubelet[1878]: I0129 13:05:53.908846 1878 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 13:05:53.909435 kubelet[1878]: I0129 13:05:53.909119 1878 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.23.118","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 13:05:53.910225 kubelet[1878]: I0129 13:05:53.909738 1878 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 13:05:53.910225 kubelet[1878]: I0129 13:05:53.909764 1878 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 13:05:53.910225 kubelet[1878]: I0129 13:05:53.909999 1878 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 13:05:53.913324 kubelet[1878]: I0129 13:05:53.913031 1878 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 13:05:53.913324 kubelet[1878]: I0129 13:05:53.913086 1878 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 13:05:53.913324 kubelet[1878]: I0129 13:05:53.913157 1878 kubelet.go:314] "Adding apiserver pod source"
Jan 29 13:05:53.913324 kubelet[1878]: I0129 13:05:53.913205 1878 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 13:05:53.914676 kubelet[1878]: E0129 13:05:53.914376 1878 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:05:53.914676 kubelet[1878]: E0129 13:05:53.914468 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:05:53.919582 kubelet[1878]: I0129 13:05:53.919304 1878 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 13:05:53.921956 kubelet[1878]: I0129 13:05:53.921841 1878 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 13:05:53.926257 kubelet[1878]: W0129 13:05:53.925568 1878 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 13:05:53.926257 kubelet[1878]: W0129 13:05:53.925878 1878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.23.118" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 13:05:53.926257 kubelet[1878]: E0129 13:05:53.925981 1878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.23.118\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 13:05:53.926257 kubelet[1878]: W0129 13:05:53.926153 1878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 13:05:53.926257 kubelet[1878]: E0129 13:05:53.926181 1878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 13:05:53.926986 kubelet[1878]: I0129 13:05:53.926959 1878 server.go:1269] "Started kubelet"
Jan 29 13:05:53.929887 kubelet[1878]: I0129 13:05:53.929839 1878 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 13:05:53.937733 kubelet[1878]: I0129 13:05:53.937507 1878 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 13:05:53.940274 kubelet[1878]: I0129 13:05:53.940244 1878 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 13:05:53.944311 kubelet[1878]: I0129 13:05:53.944184 1878 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 13:05:53.946324 kubelet[1878]: E0129 13:05:53.945766 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:53.947307 kubelet[1878]: I0129 13:05:53.947215 1878 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 13:05:53.948644 kubelet[1878]: I0129 13:05:53.948611 1878 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 13:05:53.949536 kubelet[1878]: I0129 13:05:53.947911 1878 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 13:05:53.953761 kubelet[1878]: I0129 13:05:53.951597 1878 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 13:05:53.971777 kubelet[1878]: I0129 13:05:53.947805 1878 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 13:05:53.971777 kubelet[1878]: I0129 13:05:53.957238 1878 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 13:05:53.975229 kubelet[1878]: E0129 13:05:53.972261 1878 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.23.118.181f2ba1860b40e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.23.118,UID:10.230.23.118,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.23.118,},FirstTimestamp:2025-01-29 13:05:53.92691428 +0000 UTC m=+0.725230495,LastTimestamp:2025-01-29 13:05:53.92691428 +0000 UTC m=+0.725230495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.23.118,}"
Jan 29 13:05:53.975706 kubelet[1878]: E0129 13:05:53.975672 1878 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.23.118\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 29 13:05:53.977218 kubelet[1878]: E0129 13:05:53.977190 1878 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 13:05:53.977743 kubelet[1878]: I0129 13:05:53.977715 1878 factory.go:221] Registration of the containerd container factory successfully
Jan 29 13:05:53.978206 kubelet[1878]: I0129 13:05:53.978186 1878 factory.go:221] Registration of the systemd container factory successfully
Jan 29 13:05:53.988470 kubelet[1878]: W0129 13:05:53.977955 1878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 13:05:53.988703 kubelet[1878]: E0129 13:05:53.988672 1878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 29 13:05:54.005275 kubelet[1878]: E0129 13:05:54.004776 1878 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.23.118.181f2ba1890a2547 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.23.118,UID:10.230.23.118,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.230.23.118,},FirstTimestamp:2025-01-29 13:05:53.977173319 +0000 UTC m=+0.775489534,LastTimestamp:2025-01-29 13:05:53.977173319 +0000 UTC m=+0.775489534,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.23.118,}"
Jan 29 13:05:54.012925 kubelet[1878]: I0129 13:05:54.012867 1878 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 13:05:54.012925 kubelet[1878]: I0129 13:05:54.012903 1878 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 13:05:54.013194 kubelet[1878]: I0129 13:05:54.012936 1878 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 13:05:54.019759 kubelet[1878]: I0129 13:05:54.016346 1878 policy_none.go:49] "None policy: Start"
Jan 29 13:05:54.019759 kubelet[1878]: I0129 13:05:54.019069 1878 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 13:05:54.019759 kubelet[1878]: I0129 13:05:54.019115 1878 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 13:05:54.031011 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 13:05:54.046301 kubelet[1878]: E0129 13:05:54.046251 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.048854 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 13:05:54.054156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 13:05:54.057069 kubelet[1878]: I0129 13:05:54.056758 1878 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 13:05:54.059395 kubelet[1878]: I0129 13:05:54.059199 1878 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 13:05:54.059395 kubelet[1878]: I0129 13:05:54.059333 1878 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 13:05:54.059395 kubelet[1878]: I0129 13:05:54.059375 1878 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 13:05:54.060394 kubelet[1878]: E0129 13:05:54.059587 1878 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 13:05:54.063695 kubelet[1878]: I0129 13:05:54.063661 1878 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 13:05:54.064178 kubelet[1878]: I0129 13:05:54.064150 1878 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 13:05:54.064248 kubelet[1878]: I0129 13:05:54.064201 1878 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 13:05:54.066145 kubelet[1878]: I0129 13:05:54.066117 1878 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 13:05:54.071579 kubelet[1878]: E0129 13:05:54.071507 1878 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.23.118\" not found"
Jan 29 13:05:54.166485 kubelet[1878]: I0129 13:05:54.166428 1878 kubelet_node_status.go:72] "Attempting to register node" node="10.230.23.118"
Jan 29 13:05:54.176642 kubelet[1878]: I0129 13:05:54.176495 1878 kubelet_node_status.go:75] "Successfully registered node" node="10.230.23.118"
Jan 29 13:05:54.176642 kubelet[1878]: E0129 13:05:54.176542 1878 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.230.23.118\": node \"10.230.23.118\" not found"
Jan 29 13:05:54.206656 kubelet[1878]: E0129 13:05:54.206598 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.231850 sudo[1739]: pam_unix(sudo:session): session closed for user root
Jan 29 13:05:54.307752 kubelet[1878]: E0129 13:05:54.307654 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.376355 sshd[1738]: Connection closed by 147.75.109.163 port 45470
Jan 29 13:05:54.377384 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Jan 29 13:05:54.382769 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
Jan 29 13:05:54.383594 systemd[1]: sshd@6-10.230.23.118:22-147.75.109.163:45470.service: Deactivated successfully.
Jan 29 13:05:54.386805 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 13:05:54.389066 systemd-logind[1483]: Removed session 9.
Jan 29 13:05:54.407986 kubelet[1878]: E0129 13:05:54.407875 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.508660 kubelet[1878]: E0129 13:05:54.508403 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.609626 kubelet[1878]: E0129 13:05:54.609540 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.710337 kubelet[1878]: E0129 13:05:54.710186 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.810798 kubelet[1878]: E0129 13:05:54.810607 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.857543 kubelet[1878]: I0129 13:05:54.857439 1878 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 13:05:54.857899 kubelet[1878]: W0129 13:05:54.857757 1878 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 13:05:54.911721 kubelet[1878]: E0129 13:05:54.911642 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:54.915057 kubelet[1878]: E0129 13:05:54.914954 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:05:55.012736 kubelet[1878]: E0129 13:05:55.012665 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:55.113978 kubelet[1878]: E0129 13:05:55.113768 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:55.214266 kubelet[1878]: E0129 13:05:55.214164 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:55.315315 kubelet[1878]: E0129 13:05:55.315207 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:55.415566 kubelet[1878]: E0129 13:05:55.415478 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:55.516271 kubelet[1878]: E0129 13:05:55.516180 1878 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.23.118\" not found"
Jan 29 13:05:55.618632 kubelet[1878]: I0129 13:05:55.618525 1878 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 13:05:55.619687 containerd[1508]: time="2025-01-29T13:05:55.619174236Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 13:05:55.620209 kubelet[1878]: I0129 13:05:55.619472 1878 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 13:05:55.915677 kubelet[1878]: I0129 13:05:55.915567 1878 apiserver.go:52] "Watching apiserver"
Jan 29 13:05:55.915677 kubelet[1878]: E0129 13:05:55.915602 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:05:55.921476 kubelet[1878]: E0129 13:05:55.920786 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5"
Jan 29 13:05:55.932788 systemd[1]: Created slice kubepods-besteffort-podf8e18081_b95f_4118_9cb0_2634e456ec46.slice - libcontainer container kubepods-besteffort-podf8e18081_b95f_4118_9cb0_2634e456ec46.slice.
Jan 29 13:05:55.951501 systemd[1]: Created slice kubepods-besteffort-pod3de4a356_3475_47a7_9da6_23a17e41c39f.slice - libcontainer container kubepods-besteffort-pod3de4a356_3475_47a7_9da6_23a17e41c39f.slice.
Jan 29 13:05:55.970994 kubelet[1878]: I0129 13:05:55.970960 1878 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 13:05:56.064518 kubelet[1878]: I0129 13:05:56.063642 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3de4a356-3475-47a7-9da6-23a17e41c39f-xtables-lock\") pod \"kube-proxy-hrbwt\" (UID: \"3de4a356-3475-47a7-9da6-23a17e41c39f\") " pod="kube-system/kube-proxy-hrbwt"
Jan 29 13:05:56.064518 kubelet[1878]: I0129 13:05:56.063698 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3de4a356-3475-47a7-9da6-23a17e41c39f-lib-modules\") pod \"kube-proxy-hrbwt\" (UID: \"3de4a356-3475-47a7-9da6-23a17e41c39f\") " pod="kube-system/kube-proxy-hrbwt"
Jan 29 13:05:56.064518 kubelet[1878]: I0129 13:05:56.063727 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-var-lib-calico\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.064518 kubelet[1878]: I0129 13:05:56.063761 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-cni-bin-dir\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.064518 kubelet[1878]: I0129 13:05:56.063787 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-cni-log-dir\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.064866 kubelet[1878]: I0129 13:05:56.063814 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ac21dc14-23e0-4c74-8492-563d8d3aeeb5-varrun\") pod \"csi-node-driver-tcnjs\" (UID: \"ac21dc14-23e0-4c74-8492-563d8d3aeeb5\") " pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:05:56.064866 kubelet[1878]: I0129 13:05:56.063838 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac21dc14-23e0-4c74-8492-563d8d3aeeb5-kubelet-dir\") pod \"csi-node-driver-tcnjs\" (UID: \"ac21dc14-23e0-4c74-8492-563d8d3aeeb5\") " pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:05:56.064866 kubelet[1878]: I0129 13:05:56.063895 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj7xr\" (UniqueName: \"kubernetes.io/projected/3de4a356-3475-47a7-9da6-23a17e41c39f-kube-api-access-nj7xr\") pod \"kube-proxy-hrbwt\" (UID: \"3de4a356-3475-47a7-9da6-23a17e41c39f\") " pod="kube-system/kube-proxy-hrbwt"
Jan 29 13:05:56.064866 kubelet[1878]: I0129 13:05:56.063944 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-policysync\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.064866 kubelet[1878]: I0129 13:05:56.063969 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f8e18081-b95f-4118-9cb0-2634e456ec46-node-certs\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065119 kubelet[1878]: I0129 13:05:56.064010 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-var-run-calico\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065119 kubelet[1878]: I0129 13:05:56.064039 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-cni-net-dir\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065119 kubelet[1878]: I0129 13:05:56.064065 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sn2q\" (UniqueName: \"kubernetes.io/projected/f8e18081-b95f-4118-9cb0-2634e456ec46-kube-api-access-5sn2q\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065119 kubelet[1878]: I0129 13:05:56.064090 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac21dc14-23e0-4c74-8492-563d8d3aeeb5-registration-dir\") pod \"csi-node-driver-tcnjs\" (UID: \"ac21dc14-23e0-4c74-8492-563d8d3aeeb5\") " pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:05:56.065119 kubelet[1878]: I0129 13:05:56.064114 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d965m\" (UniqueName: \"kubernetes.io/projected/ac21dc14-23e0-4c74-8492-563d8d3aeeb5-kube-api-access-d965m\") pod \"csi-node-driver-tcnjs\" (UID: \"ac21dc14-23e0-4c74-8492-563d8d3aeeb5\") " pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:05:56.065365 kubelet[1878]: I0129 13:05:56.064141 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3de4a356-3475-47a7-9da6-23a17e41c39f-kube-proxy\") pod \"kube-proxy-hrbwt\" (UID: \"3de4a356-3475-47a7-9da6-23a17e41c39f\") " pod="kube-system/kube-proxy-hrbwt"
Jan 29 13:05:56.065365 kubelet[1878]: I0129 13:05:56.064168 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-lib-modules\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065365 kubelet[1878]: I0129 13:05:56.064196 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-xtables-lock\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065365 kubelet[1878]: I0129 13:05:56.064222 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8e18081-b95f-4118-9cb0-2634e456ec46-tigera-ca-bundle\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065365 kubelet[1878]: I0129 13:05:56.064250 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f8e18081-b95f-4118-9cb0-2634e456ec46-flexvol-driver-host\") pod \"calico-node-jb472\" (UID: \"f8e18081-b95f-4118-9cb0-2634e456ec46\") " pod="calico-system/calico-node-jb472"
Jan 29 13:05:56.065612 kubelet[1878]: I0129 13:05:56.064276 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac21dc14-23e0-4c74-8492-563d8d3aeeb5-socket-dir\") pod \"csi-node-driver-tcnjs\" (UID: \"ac21dc14-23e0-4c74-8492-563d8d3aeeb5\") " pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:05:56.169977 kubelet[1878]: E0129 13:05:56.169765 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 13:05:56.169977 kubelet[1878]: W0129 13:05:56.169862 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 13:05:56.174433 kubelet[1878]: E0129 13:05:56.172541 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 13:05:56.174433 kubelet[1878]: E0129 13:05:56.173273 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 13:05:56.174433 kubelet[1878]: W0129 13:05:56.173304 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 13:05:56.174433 kubelet[1878]: E0129 13:05:56.173333 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 13:05:56.174433 kubelet[1878]: E0129 13:05:56.173599 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 13:05:56.174433 kubelet[1878]: W0129 13:05:56.173613 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 13:05:56.174433 kubelet[1878]: E0129 13:05:56.173629 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 13:05:56.174871 kubelet[1878]: E0129 13:05:56.174486 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 13:05:56.174871 kubelet[1878]: W0129 13:05:56.174520 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 13:05:56.174871 kubelet[1878]: E0129 13:05:56.174538 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 13:05:56.197667 kubelet[1878]: E0129 13:05:56.197603 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 13:05:56.197667 kubelet[1878]: W0129 13:05:56.197657 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 13:05:56.197872 kubelet[1878]: E0129 13:05:56.197686 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 13:05:56.202975 kubelet[1878]: E0129 13:05:56.201368 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 13:05:56.202975 kubelet[1878]: W0129 13:05:56.201395 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 13:05:56.202975 kubelet[1878]: E0129 13:05:56.201430 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 13:05:56.203594 kubelet[1878]: E0129 13:05:56.203501 1878 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 13:05:56.203594 kubelet[1878]: W0129 13:05:56.203523 1878 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 13:05:56.203594 kubelet[1878]: E0129 13:05:56.203549 1878 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 13:05:56.250404 containerd[1508]: time="2025-01-29T13:05:56.250346174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jb472,Uid:f8e18081-b95f-4118-9cb0-2634e456ec46,Namespace:calico-system,Attempt:0,}" Jan 29 13:05:56.255300 containerd[1508]: time="2025-01-29T13:05:56.255249160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrbwt,Uid:3de4a356-3475-47a7-9da6-23a17e41c39f,Namespace:kube-system,Attempt:0,}" Jan 29 13:05:56.916568 kubelet[1878]: E0129 13:05:56.916495 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:05:57.102744 containerd[1508]: time="2025-01-29T13:05:57.102662968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:57.105413 containerd[1508]: time="2025-01-29T13:05:57.105325431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 13:05:57.107128 containerd[1508]: time="2025-01-29T13:05:57.107070400Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:57.108218 containerd[1508]: time="2025-01-29T13:05:57.108156317Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:57.109092 containerd[1508]: time="2025-01-29T13:05:57.109019003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 13:05:57.114082 containerd[1508]: time="2025-01-29T13:05:57.113980289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 13:05:57.115527 containerd[1508]: time="2025-01-29T13:05:57.115149857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 859.765538ms" Jan 29 13:05:57.116651 containerd[1508]: time="2025-01-29T13:05:57.116614657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 865.894679ms" Jan 29 13:05:57.177126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771484421.mount: Deactivated successfully. 
Jan 29 13:05:57.249239 containerd[1508]: time="2025-01-29T13:05:57.248113649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:57.250164 containerd[1508]: time="2025-01-29T13:05:57.249528776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:57.250164 containerd[1508]: time="2025-01-29T13:05:57.249600300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:57.250164 containerd[1508]: time="2025-01-29T13:05:57.249762686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:57.254788 containerd[1508]: time="2025-01-29T13:05:57.254549171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:05:57.256031 containerd[1508]: time="2025-01-29T13:05:57.255365352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:05:57.256548 containerd[1508]: time="2025-01-29T13:05:57.256065450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:57.257525 containerd[1508]: time="2025-01-29T13:05:57.257258873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:05:57.354500 systemd[1]: Started cri-containerd-0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4.scope - libcontainer container 0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4. 
Jan 29 13:05:57.357087 systemd[1]: Started cri-containerd-1f351d11a461f1edc8325d8fc6418f089501df1d8c6b3bad6d9385c6f84cd902.scope - libcontainer container 1f351d11a461f1edc8325d8fc6418f089501df1d8c6b3bad6d9385c6f84cd902. Jan 29 13:05:57.402037 containerd[1508]: time="2025-01-29T13:05:57.401697853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jb472,Uid:f8e18081-b95f-4118-9cb0-2634e456ec46,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\"" Jan 29 13:05:57.405762 containerd[1508]: time="2025-01-29T13:05:57.405716306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 13:05:57.407589 containerd[1508]: time="2025-01-29T13:05:57.407527584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrbwt,Uid:3de4a356-3475-47a7-9da6-23a17e41c39f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f351d11a461f1edc8325d8fc6418f089501df1d8c6b3bad6d9385c6f84cd902\"" Jan 29 13:05:57.917180 kubelet[1878]: E0129 13:05:57.917097 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:05:58.061114 kubelet[1878]: E0129 13:05:58.060624 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:05:58.705699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270278396.mount: Deactivated successfully. 
Jan 29 13:05:58.870230 containerd[1508]: time="2025-01-29T13:05:58.870152067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:58.872123 containerd[1508]: time="2025-01-29T13:05:58.872084162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 13:05:58.876959 containerd[1508]: time="2025-01-29T13:05:58.876699582Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:58.881670 containerd[1508]: time="2025-01-29T13:05:58.880430408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:05:58.881670 containerd[1508]: time="2025-01-29T13:05:58.881428719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.475669103s" Jan 29 13:05:58.881670 containerd[1508]: time="2025-01-29T13:05:58.881470099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 13:05:58.883758 containerd[1508]: time="2025-01-29T13:05:58.883727956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 13:05:58.885753 containerd[1508]: time="2025-01-29T13:05:58.885721945Z" level=info msg="CreateContainer within sandbox 
\"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 13:05:58.902217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094627875.mount: Deactivated successfully. Jan 29 13:05:58.907244 containerd[1508]: time="2025-01-29T13:05:58.907186269Z" level=info msg="CreateContainer within sandbox \"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590\"" Jan 29 13:05:58.909976 containerd[1508]: time="2025-01-29T13:05:58.908651860Z" level=info msg="StartContainer for \"c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590\"" Jan 29 13:05:58.917896 kubelet[1878]: E0129 13:05:58.917837 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:05:58.962576 systemd[1]: Started cri-containerd-c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590.scope - libcontainer container c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590. Jan 29 13:05:59.004594 containerd[1508]: time="2025-01-29T13:05:59.004469045Z" level=info msg="StartContainer for \"c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590\" returns successfully" Jan 29 13:05:59.019119 systemd[1]: cri-containerd-c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590.scope: Deactivated successfully. 
Jan 29 13:05:59.100142 containerd[1508]: time="2025-01-29T13:05:59.100029109Z" level=info msg="shim disconnected" id=c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590 namespace=k8s.io Jan 29 13:05:59.100142 containerd[1508]: time="2025-01-29T13:05:59.100140127Z" level=warning msg="cleaning up after shim disconnected" id=c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590 namespace=k8s.io Jan 29 13:05:59.100142 containerd[1508]: time="2025-01-29T13:05:59.100163322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:05:59.653888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c65b0251dcbb419bb47d22224b74f1d17a62172187cd6425a14f5cc75aa5b590-rootfs.mount: Deactivated successfully. Jan 29 13:05:59.919233 kubelet[1878]: E0129 13:05:59.918436 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:00.063070 kubelet[1878]: E0129 13:06:00.062045 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:00.506681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941263500.mount: Deactivated successfully. 
Jan 29 13:06:00.918885 kubelet[1878]: E0129 13:06:00.918784 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:01.232480 containerd[1508]: time="2025-01-29T13:06:01.231197819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:01.233360 containerd[1508]: time="2025-01-29T13:06:01.233310290Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 29 13:06:01.236110 containerd[1508]: time="2025-01-29T13:06:01.236045104Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:01.246852 containerd[1508]: time="2025-01-29T13:06:01.246781489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:01.248464 containerd[1508]: time="2025-01-29T13:06:01.247802023Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.363859944s" Jan 29 13:06:01.248464 containerd[1508]: time="2025-01-29T13:06:01.247919689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 13:06:01.250279 containerd[1508]: time="2025-01-29T13:06:01.250104470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 13:06:01.251312 containerd[1508]: 
time="2025-01-29T13:06:01.251115098Z" level=info msg="CreateContainer within sandbox \"1f351d11a461f1edc8325d8fc6418f089501df1d8c6b3bad6d9385c6f84cd902\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 13:06:01.269557 containerd[1508]: time="2025-01-29T13:06:01.269491296Z" level=info msg="CreateContainer within sandbox \"1f351d11a461f1edc8325d8fc6418f089501df1d8c6b3bad6d9385c6f84cd902\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1af757a51d6f516cb4565c8e5f1b43d4f5fa23008545e914a890531b8cc24b6\"" Jan 29 13:06:01.270437 containerd[1508]: time="2025-01-29T13:06:01.270402247Z" level=info msg="StartContainer for \"c1af757a51d6f516cb4565c8e5f1b43d4f5fa23008545e914a890531b8cc24b6\"" Jan 29 13:06:01.312268 systemd[1]: run-containerd-runc-k8s.io-c1af757a51d6f516cb4565c8e5f1b43d4f5fa23008545e914a890531b8cc24b6-runc.jTvJ5P.mount: Deactivated successfully. Jan 29 13:06:01.318515 systemd[1]: Started cri-containerd-c1af757a51d6f516cb4565c8e5f1b43d4f5fa23008545e914a890531b8cc24b6.scope - libcontainer container c1af757a51d6f516cb4565c8e5f1b43d4f5fa23008545e914a890531b8cc24b6. 
Jan 29 13:06:01.357949 containerd[1508]: time="2025-01-29T13:06:01.357894212Z" level=info msg="StartContainer for \"c1af757a51d6f516cb4565c8e5f1b43d4f5fa23008545e914a890531b8cc24b6\" returns successfully" Jan 29 13:06:01.919576 kubelet[1878]: E0129 13:06:01.919480 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:02.064230 kubelet[1878]: E0129 13:06:02.062863 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:02.114226 kubelet[1878]: I0129 13:06:02.114125 1878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrbwt" podStartSLOduration=4.27454809 podStartE2EDuration="8.114075243s" podCreationTimestamp="2025-01-29 13:05:54 +0000 UTC" firstStartedPulling="2025-01-29 13:05:57.40971498 +0000 UTC m=+4.208031188" lastFinishedPulling="2025-01-29 13:06:01.249242115 +0000 UTC m=+8.047558341" observedRunningTime="2025-01-29 13:06:02.107489805 +0000 UTC m=+8.905806029" watchObservedRunningTime="2025-01-29 13:06:02.114075243 +0000 UTC m=+8.912391470" Jan 29 13:06:02.920082 kubelet[1878]: E0129 13:06:02.919969 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:03.921269 kubelet[1878]: E0129 13:06:03.921182 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:04.062491 kubelet[1878]: E0129 13:06:04.061807 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:04.923953 kubelet[1878]: E0129 13:06:04.922847 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:05.923252 kubelet[1878]: E0129 13:06:05.923195 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:06.060257 kubelet[1878]: E0129 13:06:06.060190 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:06.275042 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 13:06:06.637011 containerd[1508]: time="2025-01-29T13:06:06.635914470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:06.637988 containerd[1508]: time="2025-01-29T13:06:06.637935370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 13:06:06.639108 containerd[1508]: time="2025-01-29T13:06:06.639049129Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:06.641976 containerd[1508]: time="2025-01-29T13:06:06.641905802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:06.643250 containerd[1508]: time="2025-01-29T13:06:06.643101108Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.392955587s" Jan 29 13:06:06.643250 containerd[1508]: time="2025-01-29T13:06:06.643140788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 13:06:06.645903 containerd[1508]: time="2025-01-29T13:06:06.645858704Z" level=info msg="CreateContainer within sandbox \"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 13:06:06.660100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641871186.mount: Deactivated successfully. Jan 29 13:06:06.662592 containerd[1508]: time="2025-01-29T13:06:06.662541571Z" level=info msg="CreateContainer within sandbox \"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d\"" Jan 29 13:06:06.663335 containerd[1508]: time="2025-01-29T13:06:06.663272593Z" level=info msg="StartContainer for \"8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d\"" Jan 29 13:06:06.705619 systemd[1]: Started cri-containerd-8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d.scope - libcontainer container 8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d. 
Jan 29 13:06:06.747350 containerd[1508]: time="2025-01-29T13:06:06.747220870Z" level=info msg="StartContainer for \"8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d\" returns successfully" Jan 29 13:06:06.924003 kubelet[1878]: E0129 13:06:06.923828 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:07.572661 containerd[1508]: time="2025-01-29T13:06:07.572577666Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 13:06:07.575396 systemd[1]: cri-containerd-8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d.scope: Deactivated successfully. Jan 29 13:06:07.603152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d-rootfs.mount: Deactivated successfully. 
Jan 29 13:06:07.642718 kubelet[1878]: I0129 13:06:07.642375 1878 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 13:06:07.828035 containerd[1508]: time="2025-01-29T13:06:07.827520306Z" level=info msg="shim disconnected" id=8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d namespace=k8s.io Jan 29 13:06:07.828035 containerd[1508]: time="2025-01-29T13:06:07.827627826Z" level=warning msg="cleaning up after shim disconnected" id=8a9fa2abdff0af03e4405a35d40cec4324d8ccbada36264f634e0965dbe02c7d namespace=k8s.io Jan 29 13:06:07.828035 containerd[1508]: time="2025-01-29T13:06:07.827645753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 13:06:07.925034 kubelet[1878]: E0129 13:06:07.924950 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:08.069386 systemd[1]: Created slice kubepods-besteffort-podac21dc14_23e0_4c74_8492_563d8d3aeeb5.slice - libcontainer container kubepods-besteffort-podac21dc14_23e0_4c74_8492_563d8d3aeeb5.slice. 
Jan 29 13:06:08.073615 containerd[1508]: time="2025-01-29T13:06:08.073562733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:0,}" Jan 29 13:06:08.126747 containerd[1508]: time="2025-01-29T13:06:08.126101409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 13:06:08.185655 containerd[1508]: time="2025-01-29T13:06:08.180392639Z" level=error msg="Failed to destroy network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:08.185655 containerd[1508]: time="2025-01-29T13:06:08.182840432Z" level=error msg="encountered an error cleaning up failed sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:08.185655 containerd[1508]: time="2025-01-29T13:06:08.182944144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:08.183406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd-shm.mount: Deactivated successfully. 
Jan 29 13:06:08.186165 kubelet[1878]: E0129 13:06:08.185750 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:08.186165 kubelet[1878]: E0129 13:06:08.185854 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:08.186165 kubelet[1878]: E0129 13:06:08.185895 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:08.186435 kubelet[1878]: E0129 13:06:08.185970 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:08.400388 systemd-timesyncd[1407]: Contacted time server [2a00:2381:19c6::100]:123 (2.flatcar.pool.ntp.org). Jan 29 13:06:08.400529 systemd-timesyncd[1407]: Initial clock synchronization to Wed 2025-01-29 13:06:08.513719 UTC. Jan 29 13:06:08.930389 kubelet[1878]: E0129 13:06:08.925632 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:09.128585 kubelet[1878]: I0129 13:06:09.128518 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd" Jan 29 13:06:09.135756 containerd[1508]: time="2025-01-29T13:06:09.129585698Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:09.135756 containerd[1508]: time="2025-01-29T13:06:09.129869598Z" level=info msg="Ensure that sandbox 68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd in task-service has been cleanup successfully" Jan 29 13:06:09.135756 containerd[1508]: time="2025-01-29T13:06:09.132108123Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:09.135756 containerd[1508]: time="2025-01-29T13:06:09.132135623Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:09.133998 systemd[1]: run-netns-cni\x2dad689f73\x2d4e41\x2d78d9\x2dad2a\x2d6f063b9b1606.mount: Deactivated successfully. 
Jan 29 13:06:09.137176 containerd[1508]: time="2025-01-29T13:06:09.137051357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:1,}" Jan 29 13:06:09.240948 containerd[1508]: time="2025-01-29T13:06:09.240664718Z" level=error msg="Failed to destroy network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:09.243795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4-shm.mount: Deactivated successfully. Jan 29 13:06:09.246439 containerd[1508]: time="2025-01-29T13:06:09.245858234Z" level=error msg="encountered an error cleaning up failed sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:09.246439 containerd[1508]: time="2025-01-29T13:06:09.245956563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:09.248110 kubelet[1878]: E0129 13:06:09.247733 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:09.248110 kubelet[1878]: E0129 13:06:09.247852 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:09.248110 kubelet[1878]: E0129 13:06:09.247888 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:09.248664 kubelet[1878]: E0129 13:06:09.248406 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" 
podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:09.926992 kubelet[1878]: E0129 13:06:09.926483 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:10.135185 kubelet[1878]: I0129 13:06:10.134968 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4" Jan 29 13:06:10.138263 containerd[1508]: time="2025-01-29T13:06:10.137920174Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:10.140177 containerd[1508]: time="2025-01-29T13:06:10.138596080Z" level=info msg="Ensure that sandbox f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4 in task-service has been cleanup successfully" Jan 29 13:06:10.142237 containerd[1508]: time="2025-01-29T13:06:10.141465130Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:10.142237 containerd[1508]: time="2025-01-29T13:06:10.141504381Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:10.144726 containerd[1508]: time="2025-01-29T13:06:10.142973285Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:10.144726 containerd[1508]: time="2025-01-29T13:06:10.143278526Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:10.144726 containerd[1508]: time="2025-01-29T13:06:10.143357360Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:10.145705 containerd[1508]: time="2025-01-29T13:06:10.145623707Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:2,}" Jan 29 13:06:10.147849 systemd[1]: run-netns-cni\x2d35a2eeba\x2d452e\x2d0dd0\x2d5eab\x2dca7344d21730.mount: Deactivated successfully. Jan 29 13:06:10.259630 containerd[1508]: time="2025-01-29T13:06:10.259263821Z" level=error msg="Failed to destroy network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:10.263697 containerd[1508]: time="2025-01-29T13:06:10.260353363Z" level=error msg="encountered an error cleaning up failed sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:10.263697 containerd[1508]: time="2025-01-29T13:06:10.260478010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:10.263858 kubelet[1878]: E0129 13:06:10.262534 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:10.263858 kubelet[1878]: E0129 13:06:10.262626 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:10.263858 kubelet[1878]: E0129 13:06:10.262665 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:10.264084 kubelet[1878]: E0129 13:06:10.262739 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:10.264157 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa-shm.mount: Deactivated successfully. Jan 29 13:06:10.927537 kubelet[1878]: E0129 13:06:10.927436 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:11.141332 kubelet[1878]: I0129 13:06:11.140819 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa" Jan 29 13:06:11.151026 containerd[1508]: time="2025-01-29T13:06:11.150248497Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:11.152976 containerd[1508]: time="2025-01-29T13:06:11.152914137Z" level=info msg="Ensure that sandbox bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa in task-service has been cleanup successfully" Jan 29 13:06:11.155715 containerd[1508]: time="2025-01-29T13:06:11.153410219Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:11.155715 containerd[1508]: time="2025-01-29T13:06:11.153441480Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:11.156273 systemd[1]: run-netns-cni\x2d21396ef1\x2dc861\x2d2f78\x2d08b3\x2deb85b9393b5b.mount: Deactivated successfully. 
Jan 29 13:06:11.156696 containerd[1508]: time="2025-01-29T13:06:11.156502181Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:11.156696 containerd[1508]: time="2025-01-29T13:06:11.156645704Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:11.156696 containerd[1508]: time="2025-01-29T13:06:11.156665599Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:11.157084 containerd[1508]: time="2025-01-29T13:06:11.157047618Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:11.157185 containerd[1508]: time="2025-01-29T13:06:11.157160501Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:11.157256 containerd[1508]: time="2025-01-29T13:06:11.157185124Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:11.157910 containerd[1508]: time="2025-01-29T13:06:11.157870218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:3,}" Jan 29 13:06:11.263426 containerd[1508]: time="2025-01-29T13:06:11.263095421Z" level=error msg="Failed to destroy network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:11.266618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0-shm.mount: Deactivated 
successfully. Jan 29 13:06:11.269405 containerd[1508]: time="2025-01-29T13:06:11.269239443Z" level=error msg="encountered an error cleaning up failed sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:11.269479 containerd[1508]: time="2025-01-29T13:06:11.269411851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:11.269978 kubelet[1878]: E0129 13:06:11.269749 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:11.269978 kubelet[1878]: E0129 13:06:11.269870 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:11.269978 kubelet[1878]: E0129 13:06:11.269904 1878 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:11.270871 kubelet[1878]: E0129 13:06:11.269982 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:11.928673 kubelet[1878]: E0129 13:06:11.928559 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:12.148030 kubelet[1878]: I0129 13:06:12.147929 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0" Jan 29 13:06:12.149779 containerd[1508]: time="2025-01-29T13:06:12.148818401Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:12.149779 containerd[1508]: time="2025-01-29T13:06:12.149150654Z" level=info msg="Ensure that sandbox a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0 in task-service has 
been cleanup successfully" Jan 29 13:06:12.151929 containerd[1508]: time="2025-01-29T13:06:12.151436667Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:12.151929 containerd[1508]: time="2025-01-29T13:06:12.151468720Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:12.153854 containerd[1508]: time="2025-01-29T13:06:12.152011475Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:12.153854 containerd[1508]: time="2025-01-29T13:06:12.152148433Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:12.153854 containerd[1508]: time="2025-01-29T13:06:12.152180893Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:12.153854 containerd[1508]: time="2025-01-29T13:06:12.152671385Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:12.153854 containerd[1508]: time="2025-01-29T13:06:12.152809721Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:12.153854 containerd[1508]: time="2025-01-29T13:06:12.152830241Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:12.152900 systemd[1]: run-netns-cni\x2df487ce32\x2d8c59\x2db521\x2dfa56\x2d71967ceb79ed.mount: Deactivated successfully. 
Jan 29 13:06:12.154755 containerd[1508]: time="2025-01-29T13:06:12.154546848Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:12.154755 containerd[1508]: time="2025-01-29T13:06:12.154661754Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:12.154755 containerd[1508]: time="2025-01-29T13:06:12.154680492Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:12.155830 containerd[1508]: time="2025-01-29T13:06:12.155431932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:4,}" Jan 29 13:06:12.438078 systemd[1]: Created slice kubepods-besteffort-pod0c9c54e0_dec4_4f35_9513_44b910bfb601.slice - libcontainer container kubepods-besteffort-pod0c9c54e0_dec4_4f35_9513_44b910bfb601.slice. Jan 29 13:06:12.456332 containerd[1508]: time="2025-01-29T13:06:12.456255172Z" level=error msg="Failed to destroy network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.458548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2-shm.mount: Deactivated successfully. 
Jan 29 13:06:12.459576 containerd[1508]: time="2025-01-29T13:06:12.459372613Z" level=error msg="encountered an error cleaning up failed sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.459576 containerd[1508]: time="2025-01-29T13:06:12.459456210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.460192 kubelet[1878]: E0129 13:06:12.460147 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.460594 kubelet[1878]: E0129 13:06:12.460368 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:12.460594 kubelet[1878]: E0129 13:06:12.460433 1878 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:12.460594 kubelet[1878]: E0129 13:06:12.460516 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:12.500683 kubelet[1878]: I0129 13:06:12.500628 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpffr\" (UniqueName: \"kubernetes.io/projected/0c9c54e0-dec4-4f35-9513-44b910bfb601-kube-api-access-hpffr\") pod \"nginx-deployment-8587fbcb89-9xm5w\" (UID: \"0c9c54e0-dec4-4f35-9513-44b910bfb601\") " pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:12.745286 containerd[1508]: time="2025-01-29T13:06:12.744667077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:0,}" Jan 29 13:06:12.874840 containerd[1508]: time="2025-01-29T13:06:12.874777644Z" level=error msg="Failed to 
destroy network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.875573 containerd[1508]: time="2025-01-29T13:06:12.875523529Z" level=error msg="encountered an error cleaning up failed sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.875648 containerd[1508]: time="2025-01-29T13:06:12.875616300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.876045 kubelet[1878]: E0129 13:06:12.875915 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:12.876045 kubelet[1878]: E0129 13:06:12.876011 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:12.876325 kubelet[1878]: E0129 13:06:12.876053 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:12.876325 kubelet[1878]: E0129 13:06:12.876123 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601" Jan 29 13:06:12.929246 kubelet[1878]: E0129 13:06:12.929180 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:13.153220 kubelet[1878]: I0129 13:06:13.153160 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc" Jan 29 13:06:13.154576 containerd[1508]: 
time="2025-01-29T13:06:13.154017894Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:13.154576 containerd[1508]: time="2025-01-29T13:06:13.154278197Z" level=info msg="Ensure that sandbox 7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc in task-service has been cleanup successfully" Jan 29 13:06:13.154982 containerd[1508]: time="2025-01-29T13:06:13.154582994Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:13.154982 containerd[1508]: time="2025-01-29T13:06:13.154605631Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:13.156452 containerd[1508]: time="2025-01-29T13:06:13.156385377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:1,}" Jan 29 13:06:13.176549 kubelet[1878]: I0129 13:06:13.175507 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2" Jan 29 13:06:13.177979 containerd[1508]: time="2025-01-29T13:06:13.177930634Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:13.178309 containerd[1508]: time="2025-01-29T13:06:13.178247091Z" level=info msg="Ensure that sandbox d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2 in task-service has been cleanup successfully" Jan 29 13:06:13.179479 containerd[1508]: time="2025-01-29T13:06:13.178949026Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:13.179479 containerd[1508]: time="2025-01-29T13:06:13.179007468Z" level=info msg="StopPodSandbox for 
\"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:13.179834 containerd[1508]: time="2025-01-29T13:06:13.179802731Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:13.179932 containerd[1508]: time="2025-01-29T13:06:13.179910141Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:13.179932 containerd[1508]: time="2025-01-29T13:06:13.179928237Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:13.181230 containerd[1508]: time="2025-01-29T13:06:13.180874974Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:13.181230 containerd[1508]: time="2025-01-29T13:06:13.180974432Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:13.181230 containerd[1508]: time="2025-01-29T13:06:13.180991921Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:13.182153 containerd[1508]: time="2025-01-29T13:06:13.181974799Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:13.182153 containerd[1508]: time="2025-01-29T13:06:13.182083105Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:13.182153 containerd[1508]: time="2025-01-29T13:06:13.182107584Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:13.183782 containerd[1508]: time="2025-01-29T13:06:13.183720748Z" level=info msg="StopPodSandbox for 
\"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:13.183920 containerd[1508]: time="2025-01-29T13:06:13.183832576Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:13.183920 containerd[1508]: time="2025-01-29T13:06:13.183850408Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:13.185638 containerd[1508]: time="2025-01-29T13:06:13.185433254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:5,}" Jan 29 13:06:13.311324 containerd[1508]: time="2025-01-29T13:06:13.310357613Z" level=error msg="Failed to destroy network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.312095 containerd[1508]: time="2025-01-29T13:06:13.312051766Z" level=error msg="encountered an error cleaning up failed sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.312171 containerd[1508]: time="2025-01-29T13:06:13.312139376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.312522 kubelet[1878]: E0129 13:06:13.312477 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.313147 kubelet[1878]: E0129 13:06:13.312681 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:13.313147 kubelet[1878]: E0129 13:06:13.312765 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:13.313147 kubelet[1878]: E0129 13:06:13.312838 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601" Jan 29 13:06:13.323827 systemd[1]: run-netns-cni\x2da8a70774\x2d5d31\x2da845\x2d3aac\x2db125d1ea2f73.mount: Deactivated successfully. Jan 29 13:06:13.339249 containerd[1508]: time="2025-01-29T13:06:13.339182795Z" level=error msg="Failed to destroy network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.339911 containerd[1508]: time="2025-01-29T13:06:13.339874479Z" level=error msg="encountered an error cleaning up failed sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.341335 containerd[1508]: time="2025-01-29T13:06:13.340070706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.343135 kubelet[1878]: E0129 13:06:13.341709 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:13.343135 kubelet[1878]: E0129 13:06:13.341791 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:13.343135 kubelet[1878]: E0129 13:06:13.341821 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:13.342804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912-shm.mount: Deactivated successfully. 
Jan 29 13:06:13.343475 kubelet[1878]: E0129 13:06:13.341892 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:13.914345 kubelet[1878]: E0129 13:06:13.914270 1878 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:13.929494 kubelet[1878]: E0129 13:06:13.929444 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:14.183278 kubelet[1878]: I0129 13:06:14.183136 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912" Jan 29 13:06:14.185624 containerd[1508]: time="2025-01-29T13:06:14.184665090Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:14.185624 containerd[1508]: time="2025-01-29T13:06:14.184953717Z" level=info msg="Ensure that sandbox 95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912 in task-service has been cleanup successfully" Jan 29 13:06:14.186459 containerd[1508]: time="2025-01-29T13:06:14.186180932Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully" Jan 29 
13:06:14.186459 containerd[1508]: time="2025-01-29T13:06:14.186210037Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully" Jan 29 13:06:14.191815 containerd[1508]: time="2025-01-29T13:06:14.190689702Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:14.191815 containerd[1508]: time="2025-01-29T13:06:14.190841607Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:14.191815 containerd[1508]: time="2025-01-29T13:06:14.190865085Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:14.190946 systemd[1]: run-netns-cni\x2d18113ec6\x2d0288\x2ddc9e\x2db67b\x2d15d6cfdf466d.mount: Deactivated successfully. Jan 29 13:06:14.194307 containerd[1508]: time="2025-01-29T13:06:14.193451680Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:14.194307 containerd[1508]: time="2025-01-29T13:06:14.193568865Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:14.194307 containerd[1508]: time="2025-01-29T13:06:14.193587384Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:14.194618 kubelet[1878]: I0129 13:06:14.194583 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8" Jan 29 13:06:14.194821 containerd[1508]: time="2025-01-29T13:06:14.194779056Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:14.195019 containerd[1508]: 
time="2025-01-29T13:06:14.194992598Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:14.195143 containerd[1508]: time="2025-01-29T13:06:14.195118056Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:14.195508 containerd[1508]: time="2025-01-29T13:06:14.195479789Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:14.197404 containerd[1508]: time="2025-01-29T13:06:14.197373202Z" level=info msg="Ensure that sandbox 24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8 in task-service has been cleanup successfully" Jan 29 13:06:14.197956 containerd[1508]: time="2025-01-29T13:06:14.197714991Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully" Jan 29 13:06:14.197956 containerd[1508]: time="2025-01-29T13:06:14.197741084Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully" Jan 29 13:06:14.201373 systemd[1]: run-netns-cni\x2db6f0670c\x2daf3c\x2d0bd3\x2d4c44\x2d618bfe27023e.mount: Deactivated successfully. 
Jan 29 13:06:14.203424 containerd[1508]: time="2025-01-29T13:06:14.202962927Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:14.203424 containerd[1508]: time="2025-01-29T13:06:14.203103332Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:14.203424 containerd[1508]: time="2025-01-29T13:06:14.203122999Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:14.203424 containerd[1508]: time="2025-01-29T13:06:14.203209965Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:14.213811 containerd[1508]: time="2025-01-29T13:06:14.203768349Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:14.213811 containerd[1508]: time="2025-01-29T13:06:14.203793236Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:14.214166 containerd[1508]: time="2025-01-29T13:06:14.214117144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:2,}" Jan 29 13:06:14.224649 containerd[1508]: time="2025-01-29T13:06:14.224332497Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:14.224649 containerd[1508]: time="2025-01-29T13:06:14.224508654Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:14.224649 containerd[1508]: time="2025-01-29T13:06:14.224528990Z" level=info msg="StopPodSandbox for 
\"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:14.225883 containerd[1508]: time="2025-01-29T13:06:14.225559878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:6,}" Jan 29 13:06:14.519703 containerd[1508]: time="2025-01-29T13:06:14.519342361Z" level=error msg="Failed to destroy network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.524945 containerd[1508]: time="2025-01-29T13:06:14.522880788Z" level=error msg="encountered an error cleaning up failed sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.524945 containerd[1508]: time="2025-01-29T13:06:14.522986826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.523988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8-shm.mount: Deactivated successfully. 
Jan 29 13:06:14.525245 kubelet[1878]: E0129 13:06:14.523459 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.525245 kubelet[1878]: E0129 13:06:14.523606 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:14.525245 kubelet[1878]: E0129 13:06:14.523671 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:14.526011 kubelet[1878]: E0129 13:06:14.523753 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:14.536830 containerd[1508]: time="2025-01-29T13:06:14.536641869Z" level=error msg="Failed to destroy network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.537463 containerd[1508]: time="2025-01-29T13:06:14.537246321Z" level=error msg="encountered an error cleaning up failed sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.537463 containerd[1508]: time="2025-01-29T13:06:14.537353686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.537885 kubelet[1878]: E0129 13:06:14.537679 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 13:06:14.537885 kubelet[1878]: E0129 13:06:14.537780 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:14.537885 kubelet[1878]: E0129 13:06:14.537831 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:14.538147 kubelet[1878]: E0129 13:06:14.537930 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601" Jan 29 13:06:14.541000 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103-shm.mount: Deactivated successfully. Jan 29 13:06:14.930681 kubelet[1878]: E0129 13:06:14.930597 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:15.209450 kubelet[1878]: I0129 13:06:15.209029 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8" Jan 29 13:06:15.211304 containerd[1508]: time="2025-01-29T13:06:15.210142339Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" Jan 29 13:06:15.211760 containerd[1508]: time="2025-01-29T13:06:15.211726121Z" level=info msg="Ensure that sandbox 64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8 in task-service has been cleanup successfully" Jan 29 13:06:15.212821 containerd[1508]: time="2025-01-29T13:06:15.212746423Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully" Jan 29 13:06:15.212909 containerd[1508]: time="2025-01-29T13:06:15.212820959Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully" Jan 29 13:06:15.213514 containerd[1508]: time="2025-01-29T13:06:15.213481581Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:15.214653 containerd[1508]: time="2025-01-29T13:06:15.214622724Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully" Jan 29 13:06:15.214653 containerd[1508]: time="2025-01-29T13:06:15.214649244Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully" Jan 29 13:06:15.215933 
containerd[1508]: time="2025-01-29T13:06:15.215521226Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:15.215933 containerd[1508]: time="2025-01-29T13:06:15.215624307Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:15.215933 containerd[1508]: time="2025-01-29T13:06:15.215641467Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:15.217597 containerd[1508]: time="2025-01-29T13:06:15.217405162Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:15.217981 containerd[1508]: time="2025-01-29T13:06:15.217879300Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:15.218100 containerd[1508]: time="2025-01-29T13:06:15.218075478Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:15.218593 kubelet[1878]: I0129 13:06:15.218402 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103" Jan 29 13:06:15.219079 containerd[1508]: time="2025-01-29T13:06:15.219048454Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" Jan 29 13:06:15.219692 containerd[1508]: time="2025-01-29T13:06:15.219615585Z" level=info msg="Ensure that sandbox 5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103 in task-service has been cleanup successfully" Jan 29 13:06:15.220659 containerd[1508]: time="2025-01-29T13:06:15.219895911Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 
13:06:15.220659 containerd[1508]: time="2025-01-29T13:06:15.220479662Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully" Jan 29 13:06:15.220659 containerd[1508]: time="2025-01-29T13:06:15.220500831Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully" Jan 29 13:06:15.220659 containerd[1508]: time="2025-01-29T13:06:15.220523621Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:15.220659 containerd[1508]: time="2025-01-29T13:06:15.220542488Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:15.221724 containerd[1508]: time="2025-01-29T13:06:15.221260813Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:15.221724 containerd[1508]: time="2025-01-29T13:06:15.221439612Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:15.221724 containerd[1508]: time="2025-01-29T13:06:15.221459413Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:15.221724 containerd[1508]: time="2025-01-29T13:06:15.221542111Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:15.221724 containerd[1508]: time="2025-01-29T13:06:15.221637336Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully" Jan 29 13:06:15.221724 containerd[1508]: time="2025-01-29T13:06:15.221654666Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns 
successfully" Jan 29 13:06:15.222915 containerd[1508]: time="2025-01-29T13:06:15.222768194Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:15.223244 containerd[1508]: time="2025-01-29T13:06:15.223067292Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:15.223244 containerd[1508]: time="2025-01-29T13:06:15.223163153Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:15.223244 containerd[1508]: time="2025-01-29T13:06:15.223181055Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:15.223952 containerd[1508]: time="2025-01-29T13:06:15.223862973Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:15.223952 containerd[1508]: time="2025-01-29T13:06:15.223886421Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:15.223952 containerd[1508]: time="2025-01-29T13:06:15.223898679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:3,}" Jan 29 13:06:15.228583 containerd[1508]: time="2025-01-29T13:06:15.228550853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:7,}" Jan 29 13:06:15.376351 systemd[1]: run-netns-cni\x2dbbf61181\x2dba18\x2d77ce\x2dfd35\x2dcba0c7cc0371.mount: Deactivated successfully. Jan 29 13:06:15.376498 systemd[1]: run-netns-cni\x2dc00c1aa6\x2d7524\x2dbf0a\x2d3a0e\x2d76b88798e1e7.mount: Deactivated successfully. 
Jan 29 13:06:15.458474 containerd[1508]: time="2025-01-29T13:06:15.458400935Z" level=error msg="Failed to destroy network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.464010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279-shm.mount: Deactivated successfully.
Jan 29 13:06:15.467110 containerd[1508]: time="2025-01-29T13:06:15.466947424Z" level=error msg="encountered an error cleaning up failed sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.467110 containerd[1508]: time="2025-01-29T13:06:15.467051649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.469392 kubelet[1878]: E0129 13:06:15.468583 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.469392 kubelet[1878]: E0129 13:06:15.468931 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w"
Jan 29 13:06:15.469392 kubelet[1878]: E0129 13:06:15.468966 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w"
Jan 29 13:06:15.469620 kubelet[1878]: E0129 13:06:15.469038 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601"
Jan 29 13:06:15.475891 containerd[1508]: time="2025-01-29T13:06:15.475833720Z" level=error msg="Failed to destroy network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.476550 containerd[1508]: time="2025-01-29T13:06:15.476512677Z" level=error msg="encountered an error cleaning up failed sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.476797 containerd[1508]: time="2025-01-29T13:06:15.476758204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.477328 kubelet[1878]: E0129 13:06:15.477121 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:15.477328 kubelet[1878]: E0129 13:06:15.477190 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:06:15.477328 kubelet[1878]: E0129 13:06:15.477219 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:06:15.477566 kubelet[1878]: E0129 13:06:15.477276 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5"
Jan 29 13:06:15.482780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6-shm.mount: Deactivated successfully.
Jan 29 13:06:15.931044 kubelet[1878]: E0129 13:06:15.930907 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:16.228618 kubelet[1878]: I0129 13:06:16.227989 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6"
Jan 29 13:06:16.231078 containerd[1508]: time="2025-01-29T13:06:16.230146209Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\""
Jan 29 13:06:16.231078 containerd[1508]: time="2025-01-29T13:06:16.230412335Z" level=info msg="Ensure that sandbox 9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6 in task-service has been cleanup successfully"
Jan 29 13:06:16.231078 containerd[1508]: time="2025-01-29T13:06:16.230696317Z" level=info msg="TearDown network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" successfully"
Jan 29 13:06:16.231078 containerd[1508]: time="2025-01-29T13:06:16.230717925Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" returns successfully"
Jan 29 13:06:16.233834 systemd[1]: run-netns-cni\x2df0c86af4\x2d6321\x2dfc44\x2d6325\x2d0f831083db72.mount: Deactivated successfully.
Jan 29 13:06:16.235123 containerd[1508]: time="2025-01-29T13:06:16.233964295Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\""
Jan 29 13:06:16.235123 containerd[1508]: time="2025-01-29T13:06:16.234067909Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully"
Jan 29 13:06:16.235123 containerd[1508]: time="2025-01-29T13:06:16.234086781Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully"
Jan 29 13:06:16.236207 containerd[1508]: time="2025-01-29T13:06:16.236177800Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\""
Jan 29 13:06:16.236830 containerd[1508]: time="2025-01-29T13:06:16.236679682Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully"
Jan 29 13:06:16.236830 containerd[1508]: time="2025-01-29T13:06:16.236701394Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully"
Jan 29 13:06:16.237753 containerd[1508]: time="2025-01-29T13:06:16.237707589Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\""
Jan 29 13:06:16.237840 containerd[1508]: time="2025-01-29T13:06:16.237816074Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully"
Jan 29 13:06:16.237840 containerd[1508]: time="2025-01-29T13:06:16.237835091Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully"
Jan 29 13:06:16.238093 kubelet[1878]: I0129 13:06:16.238057 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279"
Jan 29 13:06:16.239359 containerd[1508]: time="2025-01-29T13:06:16.239141336Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\""
Jan 29 13:06:16.240018 containerd[1508]: time="2025-01-29T13:06:16.239983093Z" level=info msg="Ensure that sandbox 8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279 in task-service has been cleanup successfully"
Jan 29 13:06:16.240446 containerd[1508]: time="2025-01-29T13:06:16.240412880Z" level=info msg="TearDown network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" successfully"
Jan 29 13:06:16.240446 containerd[1508]: time="2025-01-29T13:06:16.240441449Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" returns successfully"
Jan 29 13:06:16.240573 containerd[1508]: time="2025-01-29T13:06:16.240517363Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\""
Jan 29 13:06:16.242555 containerd[1508]: time="2025-01-29T13:06:16.240617289Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully"
Jan 29 13:06:16.242555 containerd[1508]: time="2025-01-29T13:06:16.240642630Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully"
Jan 29 13:06:16.243170 containerd[1508]: time="2025-01-29T13:06:16.242804100Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\""
Jan 29 13:06:16.243170 containerd[1508]: time="2025-01-29T13:06:16.242907319Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully"
Jan 29 13:06:16.243170 containerd[1508]: time="2025-01-29T13:06:16.242926054Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully"
Jan 29 13:06:16.243170 containerd[1508]: time="2025-01-29T13:06:16.242995171Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\""
Jan 29 13:06:16.243170 containerd[1508]: time="2025-01-29T13:06:16.243087565Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully"
Jan 29 13:06:16.243170 containerd[1508]: time="2025-01-29T13:06:16.243105087Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully"
Jan 29 13:06:16.243653 systemd[1]: run-netns-cni\x2dab2e0686\x2d8588\x2dbdab\x2de5f6\x2d4b8090523fa8.mount: Deactivated successfully.
Jan 29 13:06:16.245663 containerd[1508]: time="2025-01-29T13:06:16.245632961Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\""
Jan 29 13:06:16.245884 containerd[1508]: time="2025-01-29T13:06:16.245856397Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully"
Jan 29 13:06:16.246339 containerd[1508]: time="2025-01-29T13:06:16.245966952Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully"
Jan 29 13:06:16.246339 containerd[1508]: time="2025-01-29T13:06:16.246068377Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\""
Jan 29 13:06:16.246339 containerd[1508]: time="2025-01-29T13:06:16.246169782Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully"
Jan 29 13:06:16.246339 containerd[1508]: time="2025-01-29T13:06:16.246187480Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully"
Jan 29 13:06:16.248591 containerd[1508]: time="2025-01-29T13:06:16.248173034Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\""
Jan 29 13:06:16.248591 containerd[1508]: time="2025-01-29T13:06:16.248277110Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully"
Jan 29 13:06:16.248591 containerd[1508]: time="2025-01-29T13:06:16.248317093Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully"
Jan 29 13:06:16.248591 containerd[1508]: time="2025-01-29T13:06:16.248415311Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\""
Jan 29 13:06:16.248591 containerd[1508]: time="2025-01-29T13:06:16.248507346Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully"
Jan 29 13:06:16.248591 containerd[1508]: time="2025-01-29T13:06:16.248523681Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully"
Jan 29 13:06:16.249470 containerd[1508]: time="2025-01-29T13:06:16.249439694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:8,}"
Jan 29 13:06:16.256593 containerd[1508]: time="2025-01-29T13:06:16.256545401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:4,}"
Jan 29 13:06:16.405203 containerd[1508]: time="2025-01-29T13:06:16.405134823Z" level=error msg="Failed to destroy network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.409026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974-shm.mount: Deactivated successfully.
Jan 29 13:06:16.411336 containerd[1508]: time="2025-01-29T13:06:16.410135866Z" level=error msg="encountered an error cleaning up failed sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.411336 containerd[1508]: time="2025-01-29T13:06:16.410254996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.411581 kubelet[1878]: E0129 13:06:16.410776 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.411581 kubelet[1878]: E0129 13:06:16.410865 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w"
Jan 29 13:06:16.411581 kubelet[1878]: E0129 13:06:16.410899 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w"
Jan 29 13:06:16.411740 kubelet[1878]: E0129 13:06:16.410960 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601"
Jan 29 13:06:16.424933 containerd[1508]: time="2025-01-29T13:06:16.424869332Z" level=error msg="Failed to destroy network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.427313 containerd[1508]: time="2025-01-29T13:06:16.425629779Z" level=error msg="encountered an error cleaning up failed sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.427714 containerd[1508]: time="2025-01-29T13:06:16.427564358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.429428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720-shm.mount: Deactivated successfully.
Jan 29 13:06:16.429866 kubelet[1878]: E0129 13:06:16.429726 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:16.431315 kubelet[1878]: E0129 13:06:16.430843 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:06:16.431532 kubelet[1878]: E0129 13:06:16.431491 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:06:16.431861 kubelet[1878]: E0129 13:06:16.431748 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5"
Jan 29 13:06:16.931644 kubelet[1878]: E0129 13:06:16.931450 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:17.245370 kubelet[1878]: I0129 13:06:17.244826 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720"
Jan 29 13:06:17.247001 containerd[1508]: time="2025-01-29T13:06:17.246230250Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\""
Jan 29 13:06:17.247001 containerd[1508]: time="2025-01-29T13:06:17.246826284Z" level=info msg="Ensure that sandbox fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720 in task-service has been cleanup successfully"
Jan 29 13:06:17.251315 kubelet[1878]: I0129 13:06:17.249539 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974"
Jan 29 13:06:17.251271 systemd[1]: run-netns-cni\x2d24520838\x2d5cb6\x2da437\x2def10\x2d6c88dddfe624.mount: Deactivated successfully.
Jan 29 13:06:17.251526 containerd[1508]: time="2025-01-29T13:06:17.249980421Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\""
Jan 29 13:06:17.251526 containerd[1508]: time="2025-01-29T13:06:17.250172504Z" level=info msg="Ensure that sandbox 02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974 in task-service has been cleanup successfully"
Jan 29 13:06:17.251719 containerd[1508]: time="2025-01-29T13:06:17.251685292Z" level=info msg="TearDown network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" successfully"
Jan 29 13:06:17.251855 containerd[1508]: time="2025-01-29T13:06:17.251828141Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" returns successfully"
Jan 29 13:06:17.251993 containerd[1508]: time="2025-01-29T13:06:17.251961027Z" level=info msg="TearDown network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" successfully"
Jan 29 13:06:17.252077 containerd[1508]: time="2025-01-29T13:06:17.251990834Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" returns successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.252469421Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\""
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.252574118Z" level=info msg="TearDown network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.252591668Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" returns successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.252657994Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\""
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.252785741Z" level=info msg="TearDown network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.252803749Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" returns successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.253753704Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\""
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.253853139Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.253870220Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.253937091Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\""
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.254024910Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully"
Jan 29 13:06:17.255502 containerd[1508]: time="2025-01-29T13:06:17.254072469Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully"
Jan 29 13:06:17.258333 containerd[1508]: time="2025-01-29T13:06:17.256242407Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\""
Jan 29 13:06:17.258333 containerd[1508]: time="2025-01-29T13:06:17.256378985Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully"
Jan 29 13:06:17.258333 containerd[1508]: time="2025-01-29T13:06:17.256399924Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully"
Jan 29 13:06:17.258333 containerd[1508]: time="2025-01-29T13:06:17.256472237Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\""
Jan 29 13:06:17.258333 containerd[1508]: time="2025-01-29T13:06:17.256563631Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully"
Jan 29 13:06:17.258333 containerd[1508]: time="2025-01-29T13:06:17.256581002Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully"
Jan 29 13:06:17.258530 systemd[1]: run-netns-cni\x2d1416051d\x2d0c29\x2dedd2\x2da49b\x2df97d30569af8.mount: Deactivated successfully.
Jan 29 13:06:17.260824 containerd[1508]: time="2025-01-29T13:06:17.260522938Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\""
Jan 29 13:06:17.260824 containerd[1508]: time="2025-01-29T13:06:17.260627986Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully"
Jan 29 13:06:17.260824 containerd[1508]: time="2025-01-29T13:06:17.260646978Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully"
Jan 29 13:06:17.260824 containerd[1508]: time="2025-01-29T13:06:17.260736801Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\""
Jan 29 13:06:17.261066 containerd[1508]: time="2025-01-29T13:06:17.260842249Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully"
Jan 29 13:06:17.261066 containerd[1508]: time="2025-01-29T13:06:17.260860307Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully"
Jan 29 13:06:17.262250 containerd[1508]: time="2025-01-29T13:06:17.261734731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:5,}"
Jan 29 13:06:17.262250 containerd[1508]: time="2025-01-29T13:06:17.262210562Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\""
Jan 29 13:06:17.262595 containerd[1508]: time="2025-01-29T13:06:17.262344244Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully"
Jan 29 13:06:17.262595 containerd[1508]: time="2025-01-29T13:06:17.262364405Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully"
Jan 29 13:06:17.262707 containerd[1508]: time="2025-01-29T13:06:17.262643383Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\""
Jan 29 13:06:17.262767 containerd[1508]: time="2025-01-29T13:06:17.262738615Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully"
Jan 29 13:06:17.262767 containerd[1508]: time="2025-01-29T13:06:17.262756452Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully"
Jan 29 13:06:17.264891 containerd[1508]: time="2025-01-29T13:06:17.264052689Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\""
Jan 29 13:06:17.264891 containerd[1508]: time="2025-01-29T13:06:17.264155772Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully"
Jan 29 13:06:17.264891 containerd[1508]: time="2025-01-29T13:06:17.264176215Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully"
Jan 29 13:06:17.265671 containerd[1508]: time="2025-01-29T13:06:17.265584491Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\""
Jan 29 13:06:17.265755 containerd[1508]: time="2025-01-29T13:06:17.265720399Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully"
Jan 29 13:06:17.265755 containerd[1508]: time="2025-01-29T13:06:17.265739663Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully"
Jan 29 13:06:17.266675 containerd[1508]: time="2025-01-29T13:06:17.266621717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:9,}"
Jan 29 13:06:17.449824 containerd[1508]: time="2025-01-29T13:06:17.447326121Z" level=error msg="Failed to destroy network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.452312 containerd[1508]: time="2025-01-29T13:06:17.451509332Z" level=error msg="encountered an error cleaning up failed sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.452645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401-shm.mount: Deactivated successfully.
Jan 29 13:06:17.456488 containerd[1508]: time="2025-01-29T13:06:17.452495714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.456488 containerd[1508]: time="2025-01-29T13:06:17.454068110Z" level=error msg="Failed to destroy network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.456488 containerd[1508]: time="2025-01-29T13:06:17.454623344Z" level=error msg="encountered an error cleaning up failed sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.456488 containerd[1508]: time="2025-01-29T13:06:17.454680281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.456788 kubelet[1878]: E0129 13:06:17.455556 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.456788 kubelet[1878]: E0129 13:06:17.455631 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:06:17.456788 kubelet[1878]: E0129 13:06:17.455662 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs"
Jan 29 13:06:17.456948 kubelet[1878]: E0129 13:06:17.455730 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5"
Jan 29 13:06:17.458030 kubelet[1878]: E0129 13:06:17.457977 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 13:06:17.458030 kubelet[1878]: E0129 13:06:17.458025 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w"
Jan 29 13:06:17.458163 kubelet[1878]: E0129 13:06:17.458050 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w"
Jan 29 13:06:17.458163 kubelet[1878]: E0129 13:06:17.458116 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code
= Unknown desc = failed to setup network for sandbox \\\"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601" Jan 29 13:06:17.458269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b-shm.mount: Deactivated successfully. Jan 29 13:06:17.932730 kubelet[1878]: E0129 13:06:17.932588 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:18.014257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524618940.mount: Deactivated successfully. Jan 29 13:06:18.067311 containerd[1508]: time="2025-01-29T13:06:18.067235920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:18.068394 containerd[1508]: time="2025-01-29T13:06:18.068323733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 13:06:18.069165 containerd[1508]: time="2025-01-29T13:06:18.069098282Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:18.071853 containerd[1508]: time="2025-01-29T13:06:18.071818623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:18.073546 containerd[1508]: time="2025-01-29T13:06:18.072899496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with 
image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.946738739s" Jan 29 13:06:18.073546 containerd[1508]: time="2025-01-29T13:06:18.072960569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 13:06:18.103016 containerd[1508]: time="2025-01-29T13:06:18.102952158Z" level=info msg="CreateContainer within sandbox \"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 13:06:18.117938 containerd[1508]: time="2025-01-29T13:06:18.117855088Z" level=info msg="CreateContainer within sandbox \"0d2113a633548da87d66cf96d633b08f5f0b6750b20d047c8c0f2e78a7237de4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1\"" Jan 29 13:06:18.118814 containerd[1508]: time="2025-01-29T13:06:18.118780267Z" level=info msg="StartContainer for \"0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1\"" Jan 29 13:06:18.258972 kubelet[1878]: I0129 13:06:18.257909 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b" Jan 29 13:06:18.259725 containerd[1508]: time="2025-01-29T13:06:18.258607557Z" level=info msg="StopPodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\"" Jan 29 13:06:18.260084 containerd[1508]: time="2025-01-29T13:06:18.259967789Z" level=info msg="Ensure that sandbox 96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b in task-service has been cleanup successfully" Jan 29 13:06:18.265307 containerd[1508]: 
time="2025-01-29T13:06:18.261949769Z" level=info msg="TearDown network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" successfully" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.261982896Z" level=info msg="StopPodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" returns successfully" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.262952948Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\"" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.263249349Z" level=info msg="TearDown network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" successfully" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.263274016Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" returns successfully" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.264177194Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\"" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.264527516Z" level=info msg="TearDown network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" successfully" Jan 29 13:06:18.265307 containerd[1508]: time="2025-01-29T13:06:18.264663629Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" returns successfully" Jan 29 13:06:18.267053 containerd[1508]: time="2025-01-29T13:06:18.266040990Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" Jan 29 13:06:18.267053 containerd[1508]: time="2025-01-29T13:06:18.266168574Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully" Jan 29 13:06:18.267053 containerd[1508]: 
time="2025-01-29T13:06:18.266189760Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully" Jan 29 13:06:18.267564 containerd[1508]: time="2025-01-29T13:06:18.267531530Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:18.269385 containerd[1508]: time="2025-01-29T13:06:18.268160447Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully" Jan 29 13:06:18.269385 containerd[1508]: time="2025-01-29T13:06:18.268226181Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully" Jan 29 13:06:18.269385 containerd[1508]: time="2025-01-29T13:06:18.269257768Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:18.270895 containerd[1508]: time="2025-01-29T13:06:18.269693283Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:18.270895 containerd[1508]: time="2025-01-29T13:06:18.269756902Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:18.270895 containerd[1508]: time="2025-01-29T13:06:18.270712433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:6,}" Jan 29 13:06:18.273178 kubelet[1878]: I0129 13:06:18.273152 1878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401" Jan 29 13:06:18.274951 containerd[1508]: time="2025-01-29T13:06:18.274803102Z" level=info msg="StopPodSandbox for \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\"" 
Jan 29 13:06:18.275163 containerd[1508]: time="2025-01-29T13:06:18.275040491Z" level=info msg="Ensure that sandbox c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401 in task-service has been cleanup successfully" Jan 29 13:06:18.275546 containerd[1508]: time="2025-01-29T13:06:18.275447546Z" level=info msg="TearDown network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" successfully" Jan 29 13:06:18.275609 containerd[1508]: time="2025-01-29T13:06:18.275577482Z" level=info msg="StopPodSandbox for \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" returns successfully" Jan 29 13:06:18.276091 containerd[1508]: time="2025-01-29T13:06:18.276061279Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\"" Jan 29 13:06:18.276415 containerd[1508]: time="2025-01-29T13:06:18.276387768Z" level=info msg="TearDown network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" successfully" Jan 29 13:06:18.276557 containerd[1508]: time="2025-01-29T13:06:18.276414690Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" returns successfully" Jan 29 13:06:18.277386 containerd[1508]: time="2025-01-29T13:06:18.277354946Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\"" Jan 29 13:06:18.277621 containerd[1508]: time="2025-01-29T13:06:18.277594798Z" level=info msg="TearDown network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" successfully" Jan 29 13:06:18.277694 containerd[1508]: time="2025-01-29T13:06:18.277621957Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" returns successfully" Jan 29 13:06:18.278146 containerd[1508]: time="2025-01-29T13:06:18.278117145Z" level=info msg="StopPodSandbox for 
\"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" Jan 29 13:06:18.278521 containerd[1508]: time="2025-01-29T13:06:18.278389236Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully" Jan 29 13:06:18.278621 containerd[1508]: time="2025-01-29T13:06:18.278527140Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully" Jan 29 13:06:18.279247 containerd[1508]: time="2025-01-29T13:06:18.279217910Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:18.279590 containerd[1508]: time="2025-01-29T13:06:18.279438814Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully" Jan 29 13:06:18.279590 containerd[1508]: time="2025-01-29T13:06:18.279458217Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully" Jan 29 13:06:18.280562 containerd[1508]: time="2025-01-29T13:06:18.280356622Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:18.280562 containerd[1508]: time="2025-01-29T13:06:18.280458101Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:18.280562 containerd[1508]: time="2025-01-29T13:06:18.280476781Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:18.281157 containerd[1508]: time="2025-01-29T13:06:18.280881127Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:18.281157 containerd[1508]: time="2025-01-29T13:06:18.280977166Z" level=info msg="TearDown network for sandbox 
\"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:18.281157 containerd[1508]: time="2025-01-29T13:06:18.280994304Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:18.281579 containerd[1508]: time="2025-01-29T13:06:18.281539755Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:18.302504 systemd[1]: Started cri-containerd-0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1.scope - libcontainer container 0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1. Jan 29 13:06:18.336239 containerd[1508]: time="2025-01-29T13:06:18.334984588Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:18.336239 containerd[1508]: time="2025-01-29T13:06:18.335047645Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:18.367362 systemd[1]: run-netns-cni\x2d2f7db34e\x2d604e\x2deb3c\x2df566\x2d3cc9e9010be0.mount: Deactivated successfully. Jan 29 13:06:18.367718 systemd[1]: run-netns-cni\x2d3da4314b\x2d404c\x2dfaa9\x2d53ad\x2d28a59d152b63.mount: Deactivated successfully. 
Jan 29 13:06:18.383462 containerd[1508]: time="2025-01-29T13:06:18.383411795Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:18.383881 containerd[1508]: time="2025-01-29T13:06:18.383559559Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:18.383881 containerd[1508]: time="2025-01-29T13:06:18.383635025Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:18.384908 containerd[1508]: time="2025-01-29T13:06:18.384705973Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:18.384908 containerd[1508]: time="2025-01-29T13:06:18.384839591Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:18.384908 containerd[1508]: time="2025-01-29T13:06:18.384861241Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:18.387518 containerd[1508]: time="2025-01-29T13:06:18.386133013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:10,}" Jan 29 13:06:18.419888 containerd[1508]: time="2025-01-29T13:06:18.419826313Z" level=info msg="StartContainer for \"0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1\" returns successfully" Jan 29 13:06:18.568768 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 13:06:18.569618 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 13:06:18.645800 containerd[1508]: time="2025-01-29T13:06:18.645727065Z" level=error msg="Failed to destroy network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:18.647840 containerd[1508]: time="2025-01-29T13:06:18.647788681Z" level=error msg="encountered an error cleaning up failed sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:18.647973 containerd[1508]: time="2025-01-29T13:06:18.647877251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:18.649790 kubelet[1878]: E0129 13:06:18.649691 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:18.650350 kubelet[1878]: E0129 13:06:18.649914 1878 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:18.650350 kubelet[1878]: E0129 13:06:18.649974 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9xm5w" Jan 29 13:06:18.650350 kubelet[1878]: E0129 13:06:18.650099 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9xm5w_default(0c9c54e0-dec4-4f35-9513-44b910bfb601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9xm5w" podUID="0c9c54e0-dec4-4f35-9513-44b910bfb601" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.748 [INFO][2928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.749 [INFO][2928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" iface="eth0" netns="/var/run/netns/cni-adbd0f5b-f46a-4e65-34a8-d32aa3daef1d" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.749 [INFO][2928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" iface="eth0" netns="/var/run/netns/cni-adbd0f5b-f46a-4e65-34a8-d32aa3daef1d" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.750 [INFO][2928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" iface="eth0" netns="/var/run/netns/cni-adbd0f5b-f46a-4e65-34a8-d32aa3daef1d" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.750 [INFO][2928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.750 [INFO][2928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.787 [INFO][2943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" HandleID="k8s-pod-network.44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Workload="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.787 [INFO][2943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.787 [INFO][2943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.797 [WARNING][2943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" HandleID="k8s-pod-network.44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Workload="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.797 [INFO][2943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" HandleID="k8s-pod-network.44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Workload="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.800 [INFO][2943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:18.804249 containerd[1508]: 2025-01-29 13:06:18.802 [INFO][2928] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1" Jan 29 13:06:18.807826 containerd[1508]: time="2025-01-29T13:06:18.807734143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:18.808189 kubelet[1878]: E0129 13:06:18.808135 1878 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 13:06:18.808308 kubelet[1878]: E0129 13:06:18.808224 1878 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:18.808568 kubelet[1878]: E0129 13:06:18.808264 1878 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcnjs" Jan 29 13:06:18.809041 kubelet[1878]: E0129 13:06:18.808614 1878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tcnjs_calico-system(ac21dc14-23e0-4c74-8492-563d8d3aeeb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcnjs" podUID="ac21dc14-23e0-4c74-8492-563d8d3aeeb5" Jan 29 13:06:18.933832 kubelet[1878]: E0129 13:06:18.933625 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:19.280717 kubelet[1878]: I0129 13:06:19.279418 1878 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81" Jan 29 13:06:19.281324 containerd[1508]: time="2025-01-29T13:06:19.281032367Z" level=info msg="StopPodSandbox for \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\"" Jan 29 13:06:19.281667 containerd[1508]: time="2025-01-29T13:06:19.281318726Z" level=info msg="Ensure that sandbox ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81 in task-service has been cleanup successfully" Jan 29 13:06:19.282466 containerd[1508]: time="2025-01-29T13:06:19.282437490Z" level=info msg="TearDown network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\" successfully" Jan 29 13:06:19.282604 containerd[1508]: time="2025-01-29T13:06:19.282579327Z" level=info msg="StopPodSandbox for \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\" returns successfully" Jan 29 13:06:19.283632 containerd[1508]: time="2025-01-29T13:06:19.283487672Z" level=info msg="StopPodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\"" Jan 29 13:06:19.283632 containerd[1508]: time="2025-01-29T13:06:19.283590735Z" level=info msg="TearDown network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" successfully" Jan 29 13:06:19.283632 containerd[1508]: time="2025-01-29T13:06:19.283608777Z" level=info msg="StopPodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" returns successfully" Jan 29 13:06:19.284183 containerd[1508]: time="2025-01-29T13:06:19.284148981Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\"" Jan 29 13:06:19.285790 containerd[1508]: time="2025-01-29T13:06:19.284276161Z" level=info msg="TearDown network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" successfully" Jan 29 13:06:19.285790 containerd[1508]: time="2025-01-29T13:06:19.284328637Z" 
level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" returns successfully" Jan 29 13:06:19.285909 update_engine[1484]: I20250129 13:06:19.284462 1484 update_attempter.cc:509] Updating boot flags... Jan 29 13:06:19.287137 containerd[1508]: time="2025-01-29T13:06:19.286796049Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\"" Jan 29 13:06:19.287137 containerd[1508]: time="2025-01-29T13:06:19.286902701Z" level=info msg="TearDown network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" successfully" Jan 29 13:06:19.287137 containerd[1508]: time="2025-01-29T13:06:19.286921293Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" returns successfully" Jan 29 13:06:19.288006 containerd[1508]: time="2025-01-29T13:06:19.287974353Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" Jan 29 13:06:19.288707 containerd[1508]: time="2025-01-29T13:06:19.288658808Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully" Jan 29 13:06:19.290511 containerd[1508]: time="2025-01-29T13:06:19.288900164Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully" Jan 29 13:06:19.291106 containerd[1508]: time="2025-01-29T13:06:19.290825981Z" level=info msg="StopPodSandbox for \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\"" Jan 29 13:06:19.291106 containerd[1508]: time="2025-01-29T13:06:19.290981936Z" level=info msg="TearDown network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" successfully" Jan 29 13:06:19.291106 containerd[1508]: time="2025-01-29T13:06:19.291001616Z" level=info msg="StopPodSandbox for 
\"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" returns successfully" Jan 29 13:06:19.295960 containerd[1508]: time="2025-01-29T13:06:19.295446214Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\"" Jan 29 13:06:19.295960 containerd[1508]: time="2025-01-29T13:06:19.295680765Z" level=info msg="TearDown network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" successfully" Jan 29 13:06:19.295960 containerd[1508]: time="2025-01-29T13:06:19.295733098Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" returns successfully" Jan 29 13:06:19.296816 containerd[1508]: time="2025-01-29T13:06:19.296331176Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:19.297699 containerd[1508]: time="2025-01-29T13:06:19.296590127Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully" Jan 29 13:06:19.297901 containerd[1508]: time="2025-01-29T13:06:19.297766424Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully" Jan 29 13:06:19.298135 containerd[1508]: time="2025-01-29T13:06:19.297035693Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\"" Jan 29 13:06:19.303316 containerd[1508]: time="2025-01-29T13:06:19.298631892Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:19.303316 containerd[1508]: time="2025-01-29T13:06:19.301020298Z" level=info msg="TearDown network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" successfully" Jan 29 13:06:19.303316 containerd[1508]: time="2025-01-29T13:06:19.301048548Z" level=info msg="StopPodSandbox for 
\"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" returns successfully" Jan 29 13:06:19.309516 containerd[1508]: time="2025-01-29T13:06:19.309428023Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" Jan 29 13:06:19.309861 containerd[1508]: time="2025-01-29T13:06:19.309809606Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully" Jan 29 13:06:19.309978 containerd[1508]: time="2025-01-29T13:06:19.309950247Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully" Jan 29 13:06:19.310274 kubelet[1878]: I0129 13:06:19.310189 1878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jb472" podStartSLOduration=4.641328383 podStartE2EDuration="25.310144934s" podCreationTimestamp="2025-01-29 13:05:54 +0000 UTC" firstStartedPulling="2025-01-29 13:05:57.404977562 +0000 UTC m=+4.203293773" lastFinishedPulling="2025-01-29 13:06:18.073794118 +0000 UTC m=+24.872110324" observedRunningTime="2025-01-29 13:06:19.309806105 +0000 UTC m=+26.108122321" watchObservedRunningTime="2025-01-29 13:06:19.310144934 +0000 UTC m=+26.108461148" Jan 29 13:06:19.310931 containerd[1508]: time="2025-01-29T13:06:19.310706037Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:19.311583 containerd[1508]: time="2025-01-29T13:06:19.311413042Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:19.311583 containerd[1508]: time="2025-01-29T13:06:19.311444921Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:19.311583 containerd[1508]: time="2025-01-29T13:06:19.311468551Z" level=info msg="TearDown 
network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully" Jan 29 13:06:19.311583 containerd[1508]: time="2025-01-29T13:06:19.311490385Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully" Jan 29 13:06:19.313750 containerd[1508]: time="2025-01-29T13:06:19.312106158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:7,}" Jan 29 13:06:19.315643 containerd[1508]: time="2025-01-29T13:06:19.315607679Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:19.316547 containerd[1508]: time="2025-01-29T13:06:19.316517937Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:19.316652 containerd[1508]: time="2025-01-29T13:06:19.316627580Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:19.318558 containerd[1508]: time="2025-01-29T13:06:19.318453743Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:19.318634 containerd[1508]: time="2025-01-29T13:06:19.318564962Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:19.318634 containerd[1508]: time="2025-01-29T13:06:19.318584470Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:19.319426 containerd[1508]: time="2025-01-29T13:06:19.319381385Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:19.319530 containerd[1508]: 
time="2025-01-29T13:06:19.319505109Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:19.319599 containerd[1508]: time="2025-01-29T13:06:19.319530027Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:19.320500 containerd[1508]: time="2025-01-29T13:06:19.320463676Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:19.320597 containerd[1508]: time="2025-01-29T13:06:19.320571805Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:19.320657 containerd[1508]: time="2025-01-29T13:06:19.320589296Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:19.321721 containerd[1508]: time="2025-01-29T13:06:19.321377773Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:19.321721 containerd[1508]: time="2025-01-29T13:06:19.321545873Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:19.321721 containerd[1508]: time="2025-01-29T13:06:19.321564762Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:19.324427 containerd[1508]: time="2025-01-29T13:06:19.323274122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:10,}" Jan 29 13:06:19.348849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2886) Jan 29 13:06:19.370008 systemd[1]: 
run-netns-cni\x2dadbd0f5b\x2df46a\x2d4e65\x2d34a8\x2dd32aa3daef1d.mount: Deactivated successfully. Jan 29 13:06:19.370677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44cb67337dea8a123b6778f392880c08843bc2c1a2ac5e96c8bb27e26b8a96d1-shm.mount: Deactivated successfully. Jan 29 13:06:19.370797 systemd[1]: run-netns-cni\x2d54e06e4b\x2dca5c\x2d4eba\x2d3356\x2d6cb9404a098a.mount: Deactivated successfully. Jan 29 13:06:19.370901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81-shm.mount: Deactivated successfully. Jan 29 13:06:19.686767 systemd-networkd[1422]: cali9da5e4c35f9: Link UP Jan 29 13:06:19.687411 systemd-networkd[1422]: cali9da5e4c35f9: Gained carrier Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.520 [INFO][2975] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.541 [INFO][2975] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0 nginx-deployment-8587fbcb89- default 0c9c54e0-dec4-4f35-9513-44b910bfb601 1153 0 2025-01-29 13:06:12 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.23.118 nginx-deployment-8587fbcb89-9xm5w eth0 default [] [] [kns.default ksa.default.default] cali9da5e4c35f9 [] []}} ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.542 [INFO][2975] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" 
Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.601 [INFO][3016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" HandleID="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Workload="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.616 [INFO][3016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" HandleID="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Workload="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290eb0), Attrs:map[string]string{"namespace":"default", "node":"10.230.23.118", "pod":"nginx-deployment-8587fbcb89-9xm5w", "timestamp":"2025-01-29 13:06:19.601662492 +0000 UTC"}, Hostname:"10.230.23.118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.616 [INFO][3016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.616 [INFO][3016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.616 [INFO][3016] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.23.118' Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.621 [INFO][3016] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.628 [INFO][3016] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.637 [INFO][3016] ipam/ipam.go 489: Trying affinity for 192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.640 [INFO][3016] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.644 [INFO][3016] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.644 [INFO][3016] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.648 [INFO][3016] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822 Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.656 [INFO][3016] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.670 [INFO][3016] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.193/26] block=192.168.64.192/26 
handle="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.670 [INFO][3016] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.193/26] handle="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" host="10.230.23.118" Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.670 [INFO][3016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:19.704200 containerd[1508]: 2025-01-29 13:06:19.670 [INFO][3016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.193/26] IPv6=[] ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" HandleID="k8s-pod-network.817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Workload="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.705538 containerd[1508]: 2025-01-29 13:06:19.673 [INFO][2975] cni-plugin/k8s.go 386: Populated endpoint ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"0c9c54e0-dec4-4f35-9513-44b910bfb601", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 6, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-9xm5w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9da5e4c35f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:19.705538 containerd[1508]: 2025-01-29 13:06:19.673 [INFO][2975] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.193/32] ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.705538 containerd[1508]: 2025-01-29 13:06:19.673 [INFO][2975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9da5e4c35f9 ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.705538 containerd[1508]: 2025-01-29 13:06:19.686 [INFO][2975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.705538 containerd[1508]: 2025-01-29 13:06:19.687 [INFO][2975] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" 
WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"0c9c54e0-dec4-4f35-9513-44b910bfb601", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 6, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822", Pod:"nginx-deployment-8587fbcb89-9xm5w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9da5e4c35f9", MAC:"8a:65:3c:f4:9e:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:19.705538 containerd[1508]: 2025-01-29 13:06:19.702 [INFO][2975] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822" Namespace="default" Pod="nginx-deployment-8587fbcb89-9xm5w" WorkloadEndpoint="10.230.23.118-k8s-nginx--deployment--8587fbcb89--9xm5w-eth0" Jan 29 13:06:19.742057 containerd[1508]: time="2025-01-29T13:06:19.741685997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:19.742057 containerd[1508]: time="2025-01-29T13:06:19.741825367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:19.742057 containerd[1508]: time="2025-01-29T13:06:19.741859342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:19.743345 containerd[1508]: time="2025-01-29T13:06:19.743232256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:19.777667 systemd[1]: Started cri-containerd-817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822.scope - libcontainer container 817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822. Jan 29 13:06:19.820115 systemd-networkd[1422]: calieb93ce91038: Link UP Jan 29 13:06:19.822211 systemd-networkd[1422]: calieb93ce91038: Gained carrier Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.533 [INFO][2990] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.560 [INFO][2990] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.23.118-k8s-csi--node--driver--tcnjs-eth0 csi-node-driver- calico-system ac21dc14-23e0-4c74-8492-563d8d3aeeb5 1196 0 2025-01-29 13:05:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.23.118 csi-node-driver-tcnjs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieb93ce91038 [] []}} 
ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.560 [INFO][2990] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.631 [INFO][3021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" HandleID="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Workload="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.646 [INFO][3021] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" HandleID="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Workload="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b110), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.23.118", "pod":"csi-node-driver-tcnjs", "timestamp":"2025-01-29 13:06:19.631102306 +0000 UTC"}, Hostname:"10.230.23.118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.647 [INFO][3021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.670 [INFO][3021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.671 [INFO][3021] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.23.118' Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.721 [INFO][3021] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.728 [INFO][3021] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.758 [INFO][3021] ipam/ipam.go 489: Trying affinity for 192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.762 [INFO][3021] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.774 [INFO][3021] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.774 [INFO][3021] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.782 [INFO][3021] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.793 [INFO][3021] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 
13:06:19.811 [INFO][3021] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.194/26] block=192.168.64.192/26 handle="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.811 [INFO][3021] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.194/26] handle="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" host="10.230.23.118" Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.811 [INFO][3021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:19.851343 containerd[1508]: 2025-01-29 13:06:19.811 [INFO][3021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.194/26] IPv6=[] ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" HandleID="k8s-pod-network.5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Workload="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.853227 containerd[1508]: 2025-01-29 13:06:19.814 [INFO][2990] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-csi--node--driver--tcnjs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac21dc14-23e0-4c74-8492-563d8d3aeeb5", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"", Pod:"csi-node-driver-tcnjs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb93ce91038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:19.853227 containerd[1508]: 2025-01-29 13:06:19.814 [INFO][2990] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.194/32] ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.853227 containerd[1508]: 2025-01-29 13:06:19.814 [INFO][2990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb93ce91038 ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.853227 containerd[1508]: 2025-01-29 13:06:19.822 [INFO][2990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.853227 containerd[1508]: 2025-01-29 13:06:19.824 [INFO][2990] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-csi--node--driver--tcnjs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac21dc14-23e0-4c74-8492-563d8d3aeeb5", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f", Pod:"csi-node-driver-tcnjs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb93ce91038", MAC:"a6:31:dd:f7:17:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:19.853227 containerd[1508]: 2025-01-29 13:06:19.849 [INFO][2990] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f" Namespace="calico-system" 
Pod="csi-node-driver-tcnjs" WorkloadEndpoint="10.230.23.118-k8s-csi--node--driver--tcnjs-eth0" Jan 29 13:06:19.861634 containerd[1508]: time="2025-01-29T13:06:19.861579084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9xm5w,Uid:0c9c54e0-dec4-4f35-9513-44b910bfb601,Namespace:default,Attempt:7,} returns sandbox id \"817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822\"" Jan 29 13:06:19.865357 containerd[1508]: time="2025-01-29T13:06:19.865267289Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 13:06:19.888144 containerd[1508]: time="2025-01-29T13:06:19.887962397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:19.888469 containerd[1508]: time="2025-01-29T13:06:19.888412682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:19.888542 containerd[1508]: time="2025-01-29T13:06:19.888508804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:19.888866 containerd[1508]: time="2025-01-29T13:06:19.888786069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:19.913547 systemd[1]: Started cri-containerd-5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f.scope - libcontainer container 5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f. 
Jan 29 13:06:19.934712 kubelet[1878]: E0129 13:06:19.934615 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:19.952438 containerd[1508]: time="2025-01-29T13:06:19.950613686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcnjs,Uid:ac21dc14-23e0-4c74-8492-563d8d3aeeb5,Namespace:calico-system,Attempt:10,} returns sandbox id \"5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f\""
Jan 29 13:06:20.373751 systemd[1]: run-containerd-runc-k8s.io-817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822-runc.eCjobD.mount: Deactivated successfully.
Jan 29 13:06:20.410317 kernel: bpftool[3270]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 29 13:06:20.772557 systemd-networkd[1422]: vxlan.calico: Link UP
Jan 29 13:06:20.773158 systemd-networkd[1422]: vxlan.calico: Gained carrier
Jan 29 13:06:20.935784 kubelet[1878]: E0129 13:06:20.935721 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:21.340913 systemd-networkd[1422]: cali9da5e4c35f9: Gained IPv6LL
Jan 29 13:06:21.361978 systemd[1]: run-containerd-runc-k8s.io-0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1-runc.KON0Nu.mount: Deactivated successfully.
Jan 29 13:06:21.533633 systemd-networkd[1422]: calieb93ce91038: Gained IPv6LL
Jan 29 13:06:21.936472 kubelet[1878]: E0129 13:06:21.936368 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:22.044632 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL
Jan 29 13:06:22.937019 kubelet[1878]: E0129 13:06:22.936948 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:23.601597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462423230.mount: Deactivated successfully.
Jan 29 13:06:23.937622 kubelet[1878]: E0129 13:06:23.937543 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:24.938679 kubelet[1878]: E0129 13:06:24.938524 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:25.279562 containerd[1508]: time="2025-01-29T13:06:25.279392205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:25.280801 containerd[1508]: time="2025-01-29T13:06:25.280756706Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561"
Jan 29 13:06:25.281648 containerd[1508]: time="2025-01-29T13:06:25.281609033Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:25.285597 containerd[1508]: time="2025-01-29T13:06:25.285537203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:25.287041 containerd[1508]:
time="2025-01-29T13:06:25.286871407Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.421523129s"
Jan 29 13:06:25.287041 containerd[1508]: time="2025-01-29T13:06:25.286915776Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 13:06:25.289316 containerd[1508]: time="2025-01-29T13:06:25.289269129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 13:06:25.291495 containerd[1508]: time="2025-01-29T13:06:25.291323159Z" level=info msg="CreateContainer within sandbox \"817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 29 13:06:25.307767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547432259.mount: Deactivated successfully.
Jan 29 13:06:25.311263 containerd[1508]: time="2025-01-29T13:06:25.311213967Z" level=info msg="CreateContainer within sandbox \"817a753e14de9429bb2ab61e1fa9c16404c51dad829abccf0f8ca1819ad56822\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d9c857b3e3bec7d08279c18610db9b1e9f0b9d06423e746213020b56fb297471\""
Jan 29 13:06:25.312215 containerd[1508]: time="2025-01-29T13:06:25.312163478Z" level=info msg="StartContainer for \"d9c857b3e3bec7d08279c18610db9b1e9f0b9d06423e746213020b56fb297471\""
Jan 29 13:06:25.361494 systemd[1]: Started cri-containerd-d9c857b3e3bec7d08279c18610db9b1e9f0b9d06423e746213020b56fb297471.scope - libcontainer container d9c857b3e3bec7d08279c18610db9b1e9f0b9d06423e746213020b56fb297471.
Jan 29 13:06:25.396677 containerd[1508]: time="2025-01-29T13:06:25.396615229Z" level=info msg="StartContainer for \"d9c857b3e3bec7d08279c18610db9b1e9f0b9d06423e746213020b56fb297471\" returns successfully"
Jan 29 13:06:25.938755 kubelet[1878]: E0129 13:06:25.938679 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:26.757629 containerd[1508]: time="2025-01-29T13:06:26.757550192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:26.758942 containerd[1508]: time="2025-01-29T13:06:26.758789391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 29 13:06:26.759631 containerd[1508]: time="2025-01-29T13:06:26.759544401Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:26.763434 containerd[1508]: time="2025-01-29T13:06:26.762909271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:26.764214 containerd[1508]: time="2025-01-29T13:06:26.764170710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.474724101s"
Jan 29 13:06:26.764309 containerd[1508]: time="2025-01-29T13:06:26.764221267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 29 13:06:26.767719 containerd[1508]: time="2025-01-29T13:06:26.767676330Z" level=info msg="CreateContainer within sandbox \"5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 13:06:26.792296 containerd[1508]: time="2025-01-29T13:06:26.792222086Z" level=info msg="CreateContainer within sandbox \"5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1aa43fb38fb6b4e7b125e7c99123e20360eb8e69c59969a05c6f8cf8aa76b78c\""
Jan 29 13:06:26.795231 containerd[1508]: time="2025-01-29T13:06:26.793322989Z" level=info msg="StartContainer for \"1aa43fb38fb6b4e7b125e7c99123e20360eb8e69c59969a05c6f8cf8aa76b78c\""
Jan 29 13:06:26.842523 systemd[1]: Started cri-containerd-1aa43fb38fb6b4e7b125e7c99123e20360eb8e69c59969a05c6f8cf8aa76b78c.scope - libcontainer container 1aa43fb38fb6b4e7b125e7c99123e20360eb8e69c59969a05c6f8cf8aa76b78c.
Jan 29 13:06:26.940124 kubelet[1878]: E0129 13:06:26.939725 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:26.988733 containerd[1508]: time="2025-01-29T13:06:26.988664048Z" level=info msg="StartContainer for \"1aa43fb38fb6b4e7b125e7c99123e20360eb8e69c59969a05c6f8cf8aa76b78c\" returns successfully"
Jan 29 13:06:26.990737 containerd[1508]: time="2025-01-29T13:06:26.990607756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 13:06:27.940175 kubelet[1878]: E0129 13:06:27.940098 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:28.517993 containerd[1508]: time="2025-01-29T13:06:28.517923491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:28.520026 containerd[1508]: time="2025-01-29T13:06:28.519790189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 13:06:28.520699 containerd[1508]: time="2025-01-29T13:06:28.520545953Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:28.523791 containerd[1508]: time="2025-01-29T13:06:28.523734813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 13:06:28.524782 containerd[1508]: time="2025-01-29T13:06:28.524744896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.53409533s"
Jan 29 13:06:28.525072 containerd[1508]: time="2025-01-29T13:06:28.524861453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 29 13:06:28.528210 containerd[1508]: time="2025-01-29T13:06:28.527931252Z" level=info msg="CreateContainer within sandbox \"5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 13:06:28.556579 containerd[1508]: time="2025-01-29T13:06:28.556442522Z" level=info msg="CreateContainer within sandbox \"5921974ea33fd30868e9ac4350c876a08b7f54f8d0b60386bc3ccbc355361b0f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f860a7b1c1b8f52c00a2b938262fe5a7ea385533ee24e737ad3b5f8f440e0020\""
Jan 29 13:06:28.557454 containerd[1508]: time="2025-01-29T13:06:28.557422186Z" level=info msg="StartContainer for \"f860a7b1c1b8f52c00a2b938262fe5a7ea385533ee24e737ad3b5f8f440e0020\""
Jan 29 13:06:28.605583 systemd[1]: Started cri-containerd-f860a7b1c1b8f52c00a2b938262fe5a7ea385533ee24e737ad3b5f8f440e0020.scope - libcontainer container f860a7b1c1b8f52c00a2b938262fe5a7ea385533ee24e737ad3b5f8f440e0020.
Jan 29 13:06:28.650990 containerd[1508]: time="2025-01-29T13:06:28.650820444Z" level=info msg="StartContainer for \"f860a7b1c1b8f52c00a2b938262fe5a7ea385533ee24e737ad3b5f8f440e0020\" returns successfully"
Jan 29 13:06:28.940656 kubelet[1878]: E0129 13:06:28.940564 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:29.098178 kubelet[1878]: I0129 13:06:29.097733 1878 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 13:06:29.098178 kubelet[1878]: I0129 13:06:29.097804 1878 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 13:06:29.390337 kubelet[1878]: I0129 13:06:29.390132 1878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tcnjs" podStartSLOduration=26.819406348 podStartE2EDuration="35.390108609s" podCreationTimestamp="2025-01-29 13:05:54 +0000 UTC" firstStartedPulling="2025-01-29 13:06:19.955533918 +0000 UTC m=+26.753850131" lastFinishedPulling="2025-01-29 13:06:28.526236185 +0000 UTC m=+35.324552392" observedRunningTime="2025-01-29 13:06:29.389927646 +0000 UTC m=+36.188243886" watchObservedRunningTime="2025-01-29 13:06:29.390108609 +0000 UTC m=+36.188424837"
Jan 29 13:06:29.390937 kubelet[1878]: I0129 13:06:29.390764 1878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-9xm5w" podStartSLOduration=11.966883311 podStartE2EDuration="17.390753169s" podCreationTimestamp="2025-01-29 13:06:12 +0000 UTC" firstStartedPulling="2025-01-29 13:06:19.864576517 +0000 UTC m=+26.662892730" lastFinishedPulling="2025-01-29 13:06:25.288446381 +0000 UTC m=+32.086762588" observedRunningTime="2025-01-29 13:06:26.367346731 +0000 UTC m=+33.165662988"
watchObservedRunningTime="2025-01-29 13:06:29.390753169 +0000 UTC m=+36.189069403"
Jan 29 13:06:29.941411 kubelet[1878]: E0129 13:06:29.941308 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:30.942462 kubelet[1878]: E0129 13:06:30.942392 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:31.943676 kubelet[1878]: E0129 13:06:31.943585 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:32.944593 kubelet[1878]: E0129 13:06:32.944512 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:33.913894 kubelet[1878]: E0129 13:06:33.913812 1878 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:33.944927 kubelet[1878]: E0129 13:06:33.944868 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:34.453941 systemd[1]: Created slice kubepods-besteffort-pode2d69a62_dcb5_43fe_b986_44212693ba9b.slice - libcontainer container kubepods-besteffort-pode2d69a62_dcb5_43fe_b986_44212693ba9b.slice.
Jan 29 13:06:34.547846 kubelet[1878]: I0129 13:06:34.547599 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e2d69a62-dcb5-43fe-b986-44212693ba9b-data\") pod \"nfs-server-provisioner-0\" (UID: \"e2d69a62-dcb5-43fe-b986-44212693ba9b\") " pod="default/nfs-server-provisioner-0"
Jan 29 13:06:34.547846 kubelet[1878]: I0129 13:06:34.547805 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2bc\" (UniqueName: \"kubernetes.io/projected/e2d69a62-dcb5-43fe-b986-44212693ba9b-kube-api-access-bj2bc\") pod \"nfs-server-provisioner-0\" (UID: \"e2d69a62-dcb5-43fe-b986-44212693ba9b\") " pod="default/nfs-server-provisioner-0"
Jan 29 13:06:34.760255 containerd[1508]: time="2025-01-29T13:06:34.759660099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e2d69a62-dcb5-43fe-b986-44212693ba9b,Namespace:default,Attempt:0,}"
Jan 29 13:06:34.945975 kubelet[1878]: E0129 13:06:34.945900 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:34.969017 systemd-networkd[1422]: cali60e51b789ff: Link UP
Jan 29 13:06:34.971766 systemd-networkd[1422]: cali60e51b789ff: Gained carrier
Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.852 [INFO][3586] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.23.118-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e2d69a62-dcb5-43fe-b986-44212693ba9b 1281 0 2025-01-29 13:06:34 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.23.118 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.852 [INFO][3586] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.906 [INFO][3598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" HandleID="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Workload="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.920 [INFO][3598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" HandleID="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Workload="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031caf0), Attrs:map[string]string{"namespace":"default", "node":"10.230.23.118", "pod":"nfs-server-provisioner-0", 
"timestamp":"2025-01-29 13:06:34.906001265 +0000 UTC"}, Hostname:"10.230.23.118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.920 [INFO][3598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.921 [INFO][3598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.921 [INFO][3598] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.23.118' Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.924 [INFO][3598] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.931 [INFO][3598] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.938 [INFO][3598] ipam/ipam.go 489: Trying affinity for 192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.940 [INFO][3598] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.944 [INFO][3598] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.944 [INFO][3598] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.946 [INFO][3598] ipam/ipam.go 1685: 
Creating new handle: k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612 Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.953 [INFO][3598] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.960 [INFO][3598] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.195/26] block=192.168.64.192/26 handle="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.961 [INFO][3598] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.195/26] handle="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" host="10.230.23.118" Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.961 [INFO][3598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 13:06:34.985913 containerd[1508]: 2025-01-29 13:06:34.961 [INFO][3598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.195/26] IPv6=[] ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" HandleID="k8s-pod-network.97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Workload="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:34.987360 containerd[1508]: 2025-01-29 13:06:34.962 [INFO][3586] cni-plugin/k8s.go 386: Populated endpoint ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e2d69a62-dcb5-43fe-b986-44212693ba9b", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:34.987360 containerd[1508]: 2025-01-29 13:06:34.963 [INFO][3586] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.195/32] ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:34.987360 containerd[1508]: 2025-01-29 13:06:34.963 [INFO][3586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:34.987360 containerd[1508]: 2025-01-29 13:06:34.968 [INFO][3586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:34.987893 containerd[1508]: 2025-01-29 13:06:34.969 [INFO][3586] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e2d69a62-dcb5-43fe-b986-44212693ba9b", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6a:4a:31:73:d2:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:34.987893 containerd[1508]: 2025-01-29 13:06:34.982 [INFO][3586] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.23.118-k8s-nfs--server--provisioner--0-eth0" Jan 29 13:06:35.021929 containerd[1508]: time="2025-01-29T13:06:35.020388883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:35.021929 containerd[1508]: time="2025-01-29T13:06:35.020473075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:35.021929 containerd[1508]: time="2025-01-29T13:06:35.020523474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 13:06:35.022502 containerd[1508]: time="2025-01-29T13:06:35.020775029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 13:06:35.046884 systemd[1]: run-containerd-runc-k8s.io-97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612-runc.7F6w5U.mount: Deactivated successfully.
Jan 29 13:06:35.056498 systemd[1]: Started cri-containerd-97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612.scope - libcontainer container 97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612.
Jan 29 13:06:35.121662 containerd[1508]: time="2025-01-29T13:06:35.121581432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e2d69a62-dcb5-43fe-b986-44212693ba9b,Namespace:default,Attempt:0,} returns sandbox id \"97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612\""
Jan 29 13:06:35.124250 containerd[1508]: time="2025-01-29T13:06:35.124217383Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 29 13:06:35.946487 kubelet[1878]: E0129 13:06:35.946387 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:36.946786 kubelet[1878]: E0129 13:06:36.946718 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:36.957768 systemd-networkd[1422]: cali60e51b789ff: Gained IPv6LL
Jan 29 13:06:37.947543 kubelet[1878]: E0129 13:06:37.947409 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 13:06:38.457526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421124846.mount: Deactivated successfully.
Jan 29 13:06:38.948651 kubelet[1878]: E0129 13:06:38.948597 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:39.950279 kubelet[1878]: E0129 13:06:39.950206 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:40.950765 kubelet[1878]: E0129 13:06:40.950707 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:41.427798 containerd[1508]: time="2025-01-29T13:06:41.426528846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:41.430062 containerd[1508]: time="2025-01-29T13:06:41.429926340Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 13:06:41.431534 containerd[1508]: time="2025-01-29T13:06:41.431425609Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:41.435670 containerd[1508]: time="2025-01-29T13:06:41.435579921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:41.437063 containerd[1508]: time="2025-01-29T13:06:41.437019746Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.312758868s" Jan 29 13:06:41.437128 containerd[1508]: time="2025-01-29T13:06:41.437067857Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 13:06:41.442088 containerd[1508]: time="2025-01-29T13:06:41.442040039Z" level=info msg="CreateContainer within sandbox \"97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 13:06:41.456750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361593769.mount: Deactivated successfully. Jan 29 13:06:41.466465 containerd[1508]: time="2025-01-29T13:06:41.466386500Z" level=info msg="CreateContainer within sandbox \"97efe15039e0d4075b9a03217e2ae1a457ae642603544abce3cb8da222474612\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4bef3e8549852c79d26f5d7bc0097cc571f24dbdc1de9ff0a7b42d88ffe5b4cc\"" Jan 29 13:06:41.473506 containerd[1508]: time="2025-01-29T13:06:41.473427591Z" level=info msg="StartContainer for \"4bef3e8549852c79d26f5d7bc0097cc571f24dbdc1de9ff0a7b42d88ffe5b4cc\"" Jan 29 13:06:41.518553 systemd[1]: Started cri-containerd-4bef3e8549852c79d26f5d7bc0097cc571f24dbdc1de9ff0a7b42d88ffe5b4cc.scope - libcontainer container 4bef3e8549852c79d26f5d7bc0097cc571f24dbdc1de9ff0a7b42d88ffe5b4cc. 
Jan 29 13:06:41.554974 containerd[1508]: time="2025-01-29T13:06:41.554915341Z" level=info msg="StartContainer for \"4bef3e8549852c79d26f5d7bc0097cc571f24dbdc1de9ff0a7b42d88ffe5b4cc\" returns successfully" Jan 29 13:06:41.951692 kubelet[1878]: E0129 13:06:41.951610 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:42.446913 kubelet[1878]: I0129 13:06:42.446775 1878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.130817521 podStartE2EDuration="8.446747719s" podCreationTimestamp="2025-01-29 13:06:34 +0000 UTC" firstStartedPulling="2025-01-29 13:06:35.123908004 +0000 UTC m=+41.922224210" lastFinishedPulling="2025-01-29 13:06:41.439838195 +0000 UTC m=+48.238154408" observedRunningTime="2025-01-29 13:06:42.446244146 +0000 UTC m=+49.244560382" watchObservedRunningTime="2025-01-29 13:06:42.446747719 +0000 UTC m=+49.245063938" Jan 29 13:06:42.951978 kubelet[1878]: E0129 13:06:42.951898 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:43.952828 kubelet[1878]: E0129 13:06:43.952745 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:44.953506 kubelet[1878]: E0129 13:06:44.953394 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:45.954411 kubelet[1878]: E0129 13:06:45.954191 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:46.955038 kubelet[1878]: E0129 13:06:46.954921 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:47.955891 kubelet[1878]: E0129 13:06:47.955787 1878 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:48.957076 kubelet[1878]: E0129 13:06:48.956998 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:49.957974 kubelet[1878]: E0129 13:06:49.957883 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:50.958401 kubelet[1878]: E0129 13:06:50.958319 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:51.402613 systemd[1]: Created slice kubepods-besteffort-pod670481bc_7227_49e4_8524_d4b2d280892f.slice - libcontainer container kubepods-besteffort-pod670481bc_7227_49e4_8524_d4b2d280892f.slice. Jan 29 13:06:51.550109 kubelet[1878]: I0129 13:06:51.550035 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qp8s\" (UniqueName: \"kubernetes.io/projected/670481bc-7227-49e4-8524-d4b2d280892f-kube-api-access-4qp8s\") pod \"test-pod-1\" (UID: \"670481bc-7227-49e4-8524-d4b2d280892f\") " pod="default/test-pod-1" Jan 29 13:06:51.550354 kubelet[1878]: I0129 13:06:51.550120 1878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d448bfb-d0ae-4ec1-abe2-1ff8fd464fa9\" (UniqueName: \"kubernetes.io/nfs/670481bc-7227-49e4-8524-d4b2d280892f-pvc-9d448bfb-d0ae-4ec1-abe2-1ff8fd464fa9\") pod \"test-pod-1\" (UID: \"670481bc-7227-49e4-8524-d4b2d280892f\") " pod="default/test-pod-1" Jan 29 13:06:51.721819 kernel: FS-Cache: Loaded Jan 29 13:06:51.815823 kernel: RPC: Registered named UNIX socket transport module. Jan 29 13:06:51.816044 kernel: RPC: Registered udp transport module. Jan 29 13:06:51.816088 kernel: RPC: Registered tcp transport module. Jan 29 13:06:51.816367 kernel: RPC: Registered tcp-with-tls transport module. 
Jan 29 13:06:51.817607 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 13:06:51.958847 kubelet[1878]: E0129 13:06:51.958769 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:52.178519 kernel: NFS: Registering the id_resolver key type Jan 29 13:06:52.178860 kernel: Key type id_resolver registered Jan 29 13:06:52.178927 kernel: Key type id_legacy registered Jan 29 13:06:52.235612 nfsidmap[3789]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 29 13:06:52.247726 nfsidmap[3792]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 29 13:06:52.323480 containerd[1508]: time="2025-01-29T13:06:52.322153284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:670481bc-7227-49e4-8524-d4b2d280892f,Namespace:default,Attempt:0,}" Jan 29 13:06:52.516650 systemd-networkd[1422]: cali5ec59c6bf6e: Link UP Jan 29 13:06:52.518461 systemd-networkd[1422]: cali5ec59c6bf6e: Gained carrier Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.399 [INFO][3796] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.23.118-k8s-test--pod--1-eth0 default 670481bc-7227-49e4-8524-d4b2d280892f 1343 0 2025-01-29 13:06:36 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.23.118 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.399 [INFO][3796] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.451 [INFO][3807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" HandleID="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Workload="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.465 [INFO][3807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" HandleID="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Workload="10.230.23.118-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293170), Attrs:map[string]string{"namespace":"default", "node":"10.230.23.118", "pod":"test-pod-1", "timestamp":"2025-01-29 13:06:52.45153292 +0000 UTC"}, Hostname:"10.230.23.118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.465 [INFO][3807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.465 [INFO][3807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.465 [INFO][3807] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.23.118' Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.469 [INFO][3807] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.475 [INFO][3807] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.481 [INFO][3807] ipam/ipam.go 489: Trying affinity for 192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.484 [INFO][3807] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.487 [INFO][3807] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.487 [INFO][3807] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.490 [INFO][3807] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.498 [INFO][3807] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.506 [INFO][3807] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.196/26] block=192.168.64.192/26 
handle="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.506 [INFO][3807] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.196/26] handle="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" host="10.230.23.118" Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.506 [INFO][3807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 13:06:52.536880 containerd[1508]: 2025-01-29 13:06:52.506 [INFO][3807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.196/26] IPv6=[] ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" HandleID="k8s-pod-network.1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Workload="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.538265 containerd[1508]: 2025-01-29 13:06:52.509 [INFO][3796] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"670481bc-7227-49e4-8524-d4b2d280892f", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 13, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"10.230.23.118", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:52.538265 containerd[1508]: 2025-01-29 13:06:52.509 [INFO][3796] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.196/32] ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.538265 containerd[1508]: 2025-01-29 13:06:52.509 [INFO][3796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.538265 containerd[1508]: 2025-01-29 13:06:52.519 [INFO][3796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.538265 containerd[1508]: 2025-01-29 13:06:52.520 [INFO][3796] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.23.118-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"670481bc-7227-49e4-8524-d4b2d280892f", ResourceVersion:"1343", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 29, 13, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.23.118", ContainerID:"1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"de:d3:12:11:f3:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 13:06:52.538265 containerd[1508]: 2025-01-29 13:06:52.531 [INFO][3796] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.23.118-k8s-test--pod--1-eth0" Jan 29 13:06:52.584440 containerd[1508]: time="2025-01-29T13:06:52.584082557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 13:06:52.584440 containerd[1508]: time="2025-01-29T13:06:52.584229812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 13:06:52.584440 containerd[1508]: time="2025-01-29T13:06:52.584256117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:52.585386 containerd[1508]: time="2025-01-29T13:06:52.585008083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 13:06:52.620695 systemd[1]: Started cri-containerd-1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e.scope - libcontainer container 1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e. Jan 29 13:06:52.686537 containerd[1508]: time="2025-01-29T13:06:52.686481156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:670481bc-7227-49e4-8524-d4b2d280892f,Namespace:default,Attempt:0,} returns sandbox id \"1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e\"" Jan 29 13:06:52.688739 containerd[1508]: time="2025-01-29T13:06:52.688708066Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 13:06:52.960659 kubelet[1878]: E0129 13:06:52.960511 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:53.049385 containerd[1508]: time="2025-01-29T13:06:53.049303363Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 13:06:53.051262 containerd[1508]: time="2025-01-29T13:06:53.051184468Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 13:06:53.055409 containerd[1508]: time="2025-01-29T13:06:53.055369285Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 366.607623ms" Jan 29 13:06:53.055653 containerd[1508]: 
time="2025-01-29T13:06:53.055524587Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 13:06:53.058936 containerd[1508]: time="2025-01-29T13:06:53.058710472Z" level=info msg="CreateContainer within sandbox \"1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 13:06:53.087858 containerd[1508]: time="2025-01-29T13:06:53.087695334Z" level=info msg="CreateContainer within sandbox \"1d01b22a111278260afd41f862f19b5811df0cc7f36ecf8fb500316f4bcef02e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1c7c222dded664d573616152836139328eff191f2d74b5c247f0a8841fc9e028\"" Jan 29 13:06:53.088714 containerd[1508]: time="2025-01-29T13:06:53.088567314Z" level=info msg="StartContainer for \"1c7c222dded664d573616152836139328eff191f2d74b5c247f0a8841fc9e028\"" Jan 29 13:06:53.131555 systemd[1]: Started cri-containerd-1c7c222dded664d573616152836139328eff191f2d74b5c247f0a8841fc9e028.scope - libcontainer container 1c7c222dded664d573616152836139328eff191f2d74b5c247f0a8841fc9e028. 
Jan 29 13:06:53.182830 containerd[1508]: time="2025-01-29T13:06:53.182684087Z" level=info msg="StartContainer for \"1c7c222dded664d573616152836139328eff191f2d74b5c247f0a8841fc9e028\" returns successfully" Jan 29 13:06:53.724690 systemd-networkd[1422]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 13:06:53.913667 kubelet[1878]: E0129 13:06:53.913564 1878 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:53.961151 kubelet[1878]: E0129 13:06:53.961092 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:53.979760 containerd[1508]: time="2025-01-29T13:06:53.979605854Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:53.980538 containerd[1508]: time="2025-01-29T13:06:53.979776222Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:53.980538 containerd[1508]: time="2025-01-29T13:06:53.979796302Z" level=info msg="StopPodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:53.983995 containerd[1508]: time="2025-01-29T13:06:53.983962911Z" level=info msg="RemovePodSandbox for \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:53.991765 containerd[1508]: time="2025-01-29T13:06:53.991702548Z" level=info msg="Forcibly stopping sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\"" Jan 29 13:06:53.991916 containerd[1508]: time="2025-01-29T13:06:53.991832307Z" level=info msg="TearDown network for sandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" successfully" Jan 29 13:06:54.010481 containerd[1508]: time="2025-01-29T13:06:54.010262719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 13:06:54.010481 containerd[1508]: time="2025-01-29T13:06:54.010408307Z" level=info msg="RemovePodSandbox \"7e6768f4f4034764ce3dfae9a60bd95c769d0afcb67b58697658a9e2d13a5ecc\" returns successfully" Jan 29 13:06:54.011122 containerd[1508]: time="2025-01-29T13:06:54.011072254Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:54.011243 containerd[1508]: time="2025-01-29T13:06:54.011208392Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully" Jan 29 13:06:54.011243 containerd[1508]: time="2025-01-29T13:06:54.011228310Z" level=info msg="StopPodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully" Jan 29 13:06:54.012797 containerd[1508]: time="2025-01-29T13:06:54.011587897Z" level=info msg="RemovePodSandbox for \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:54.012797 containerd[1508]: time="2025-01-29T13:06:54.011623359Z" level=info msg="Forcibly stopping sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\"" Jan 29 13:06:54.012797 containerd[1508]: time="2025-01-29T13:06:54.011714266Z" level=info msg="TearDown network for sandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" successfully" Jan 29 13:06:54.015328 containerd[1508]: time="2025-01-29T13:06:54.014356123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.015328 containerd[1508]: time="2025-01-29T13:06:54.014408481Z" level=info msg="RemovePodSandbox \"24bc3d97635d102c8b625be4a39399ee1aeffea51e3942c828350f65913580a8\" returns successfully" Jan 29 13:06:54.015823 containerd[1508]: time="2025-01-29T13:06:54.015789266Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" Jan 29 13:06:54.016039 containerd[1508]: time="2025-01-29T13:06:54.016011415Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully" Jan 29 13:06:54.016209 containerd[1508]: time="2025-01-29T13:06:54.016123960Z" level=info msg="StopPodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully" Jan 29 13:06:54.016602 containerd[1508]: time="2025-01-29T13:06:54.016571710Z" level=info msg="RemovePodSandbox for \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" Jan 29 13:06:54.016735 containerd[1508]: time="2025-01-29T13:06:54.016709485Z" level=info msg="Forcibly stopping sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\"" Jan 29 13:06:54.016933 containerd[1508]: time="2025-01-29T13:06:54.016885187Z" level=info msg="TearDown network for sandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" successfully" Jan 29 13:06:54.020462 containerd[1508]: time="2025-01-29T13:06:54.020422514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.020647 containerd[1508]: time="2025-01-29T13:06:54.020617965Z" level=info msg="RemovePodSandbox \"5ae8539768dde0b4b7d53f0cf55858b9bec886fc086f463dfdfa8a87f703f103\" returns successfully" Jan 29 13:06:54.021148 containerd[1508]: time="2025-01-29T13:06:54.021118473Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\"" Jan 29 13:06:54.021510 containerd[1508]: time="2025-01-29T13:06:54.021482540Z" level=info msg="TearDown network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" successfully" Jan 29 13:06:54.021648 containerd[1508]: time="2025-01-29T13:06:54.021622008Z" level=info msg="StopPodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" returns successfully" Jan 29 13:06:54.022152 containerd[1508]: time="2025-01-29T13:06:54.022122925Z" level=info msg="RemovePodSandbox for \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\"" Jan 29 13:06:54.023568 containerd[1508]: time="2025-01-29T13:06:54.022482912Z" level=info msg="Forcibly stopping sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\"" Jan 29 13:06:54.023568 containerd[1508]: time="2025-01-29T13:06:54.022581379Z" level=info msg="TearDown network for sandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" successfully" Jan 29 13:06:54.025449 containerd[1508]: time="2025-01-29T13:06:54.025414054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.025639 containerd[1508]: time="2025-01-29T13:06:54.025610380Z" level=info msg="RemovePodSandbox \"8a7b0edf178e94d9efefea5355ab892a6135c08a39ca09f556e430f64121a279\" returns successfully" Jan 29 13:06:54.026214 containerd[1508]: time="2025-01-29T13:06:54.026184425Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\"" Jan 29 13:06:54.026587 containerd[1508]: time="2025-01-29T13:06:54.026559819Z" level=info msg="TearDown network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" successfully" Jan 29 13:06:54.026734 containerd[1508]: time="2025-01-29T13:06:54.026707843Z" level=info msg="StopPodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" returns successfully" Jan 29 13:06:54.027425 containerd[1508]: time="2025-01-29T13:06:54.027243456Z" level=info msg="RemovePodSandbox for \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\"" Jan 29 13:06:54.027556 containerd[1508]: time="2025-01-29T13:06:54.027530709Z" level=info msg="Forcibly stopping sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\"" Jan 29 13:06:54.027823 containerd[1508]: time="2025-01-29T13:06:54.027776389Z" level=info msg="TearDown network for sandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" successfully" Jan 29 13:06:54.031777 containerd[1508]: time="2025-01-29T13:06:54.031742632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.031969 containerd[1508]: time="2025-01-29T13:06:54.031940156Z" level=info msg="RemovePodSandbox \"02ae5515988d032760a90241ac57d0688b979398522a72ca7bfa1a3bcfa4d974\" returns successfully" Jan 29 13:06:54.032633 containerd[1508]: time="2025-01-29T13:06:54.032599261Z" level=info msg="StopPodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\"" Jan 29 13:06:54.032744 containerd[1508]: time="2025-01-29T13:06:54.032718461Z" level=info msg="TearDown network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" successfully" Jan 29 13:06:54.032815 containerd[1508]: time="2025-01-29T13:06:54.032745046Z" level=info msg="StopPodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" returns successfully" Jan 29 13:06:54.033317 containerd[1508]: time="2025-01-29T13:06:54.033227298Z" level=info msg="RemovePodSandbox for \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\"" Jan 29 13:06:54.034334 containerd[1508]: time="2025-01-29T13:06:54.033466284Z" level=info msg="Forcibly stopping sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\"" Jan 29 13:06:54.034334 containerd[1508]: time="2025-01-29T13:06:54.033568273Z" level=info msg="TearDown network for sandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" successfully" Jan 29 13:06:54.037766 containerd[1508]: time="2025-01-29T13:06:54.037718031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.037854 containerd[1508]: time="2025-01-29T13:06:54.037779232Z" level=info msg="RemovePodSandbox \"96d71c6ce1c17ba958a3091dad1360b894fee90e9c1277be5d1fb525e2bdc23b\" returns successfully" Jan 29 13:06:54.038541 containerd[1508]: time="2025-01-29T13:06:54.038257996Z" level=info msg="StopPodSandbox for \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\"" Jan 29 13:06:54.038541 containerd[1508]: time="2025-01-29T13:06:54.038415631Z" level=info msg="TearDown network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\" successfully" Jan 29 13:06:54.038541 containerd[1508]: time="2025-01-29T13:06:54.038437501Z" level=info msg="StopPodSandbox for \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\" returns successfully" Jan 29 13:06:54.038953 containerd[1508]: time="2025-01-29T13:06:54.038850948Z" level=info msg="RemovePodSandbox for \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\"" Jan 29 13:06:54.039042 containerd[1508]: time="2025-01-29T13:06:54.038985124Z" level=info msg="Forcibly stopping sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\"" Jan 29 13:06:54.039157 containerd[1508]: time="2025-01-29T13:06:54.039081849Z" level=info msg="TearDown network for sandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\" successfully" Jan 29 13:06:54.041528 containerd[1508]: time="2025-01-29T13:06:54.041486335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.041601 containerd[1508]: time="2025-01-29T13:06:54.041537705Z" level=info msg="RemovePodSandbox \"ba67376b1f089398f75b39c2b6cd0bba676e6ce39bf7f902417a71037d9ead81\" returns successfully" Jan 29 13:06:54.041922 containerd[1508]: time="2025-01-29T13:06:54.041884216Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:54.042090 containerd[1508]: time="2025-01-29T13:06:54.041993885Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:54.042090 containerd[1508]: time="2025-01-29T13:06:54.042017249Z" level=info msg="StopPodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:54.043847 containerd[1508]: time="2025-01-29T13:06:54.042678368Z" level=info msg="RemovePodSandbox for \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:54.043847 containerd[1508]: time="2025-01-29T13:06:54.042713367Z" level=info msg="Forcibly stopping sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\"" Jan 29 13:06:54.043847 containerd[1508]: time="2025-01-29T13:06:54.042801267Z" level=info msg="TearDown network for sandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" successfully" Jan 29 13:06:54.045100 containerd[1508]: time="2025-01-29T13:06:54.045054755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.045209 containerd[1508]: time="2025-01-29T13:06:54.045110612Z" level=info msg="RemovePodSandbox \"68c05b2e3e6fc50790a3d480d0e8d3ad8f8efdb9bf5ec0c4f5439a284af61cfd\" returns successfully" Jan 29 13:06:54.046800 containerd[1508]: time="2025-01-29T13:06:54.046330809Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:54.046800 containerd[1508]: time="2025-01-29T13:06:54.046478160Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:54.046800 containerd[1508]: time="2025-01-29T13:06:54.046523848Z" level=info msg="StopPodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:54.077133 containerd[1508]: time="2025-01-29T13:06:54.076805640Z" level=info msg="RemovePodSandbox for \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:54.077133 containerd[1508]: time="2025-01-29T13:06:54.076861599Z" level=info msg="Forcibly stopping sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\"" Jan 29 13:06:54.077133 containerd[1508]: time="2025-01-29T13:06:54.076983034Z" level=info msg="TearDown network for sandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" successfully" Jan 29 13:06:54.083989 containerd[1508]: time="2025-01-29T13:06:54.083176797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.083989 containerd[1508]: time="2025-01-29T13:06:54.083358121Z" level=info msg="RemovePodSandbox \"f61d6ee5389e43c43a84d46d0c771edb2ff088040550efc66d97a815bcfa91a4\" returns successfully" Jan 29 13:06:54.085323 containerd[1508]: time="2025-01-29T13:06:54.084924020Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:54.085323 containerd[1508]: time="2025-01-29T13:06:54.085039060Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:54.085323 containerd[1508]: time="2025-01-29T13:06:54.085058419Z" level=info msg="StopPodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:54.085762 containerd[1508]: time="2025-01-29T13:06:54.085583713Z" level=info msg="RemovePodSandbox for \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:54.085762 containerd[1508]: time="2025-01-29T13:06:54.085674898Z" level=info msg="Forcibly stopping sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\"" Jan 29 13:06:54.086188 containerd[1508]: time="2025-01-29T13:06:54.085943912Z" level=info msg="TearDown network for sandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" successfully" Jan 29 13:06:54.088954 containerd[1508]: time="2025-01-29T13:06:54.088904659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.089054 containerd[1508]: time="2025-01-29T13:06:54.088959319Z" level=info msg="RemovePodSandbox \"bea178fe28cf855ef27058adce84845b8f5abe30355d68106f97d1f715ffb2aa\" returns successfully" Jan 29 13:06:54.089766 containerd[1508]: time="2025-01-29T13:06:54.089526939Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:54.089766 containerd[1508]: time="2025-01-29T13:06:54.089665551Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:54.089766 containerd[1508]: time="2025-01-29T13:06:54.089686741Z" level=info msg="StopPodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:54.091322 containerd[1508]: time="2025-01-29T13:06:54.090322732Z" level=info msg="RemovePodSandbox for \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:54.091322 containerd[1508]: time="2025-01-29T13:06:54.090357256Z" level=info msg="Forcibly stopping sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\"" Jan 29 13:06:54.091322 containerd[1508]: time="2025-01-29T13:06:54.090468581Z" level=info msg="TearDown network for sandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" successfully" Jan 29 13:06:54.093117 containerd[1508]: time="2025-01-29T13:06:54.093082050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.093247 containerd[1508]: time="2025-01-29T13:06:54.093219712Z" level=info msg="RemovePodSandbox \"a3c5920ccc742c50f2cb190847e8db5db50a928a9c848c3b3182315c3345bca0\" returns successfully" Jan 29 13:06:54.093747 containerd[1508]: time="2025-01-29T13:06:54.093717808Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:54.094110 containerd[1508]: time="2025-01-29T13:06:54.093965750Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:54.094110 containerd[1508]: time="2025-01-29T13:06:54.093990306Z" level=info msg="StopPodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:54.094474 containerd[1508]: time="2025-01-29T13:06:54.094444975Z" level=info msg="RemovePodSandbox for \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:54.094530 containerd[1508]: time="2025-01-29T13:06:54.094480715Z" level=info msg="Forcibly stopping sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\"" Jan 29 13:06:54.094603 containerd[1508]: time="2025-01-29T13:06:54.094567520Z" level=info msg="TearDown network for sandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" successfully" Jan 29 13:06:54.096835 containerd[1508]: time="2025-01-29T13:06:54.096793105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.096929 containerd[1508]: time="2025-01-29T13:06:54.096848474Z" level=info msg="RemovePodSandbox \"d25fdc15c38fc8e86faec4088bb463c13036fd3ee0f7c4f819cfce539211e8d2\" returns successfully" Jan 29 13:06:54.097238 containerd[1508]: time="2025-01-29T13:06:54.097206201Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:54.097579 containerd[1508]: time="2025-01-29T13:06:54.097449464Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully" Jan 29 13:06:54.097579 containerd[1508]: time="2025-01-29T13:06:54.097476624Z" level=info msg="StopPodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully" Jan 29 13:06:54.097840 containerd[1508]: time="2025-01-29T13:06:54.097806434Z" level=info msg="RemovePodSandbox for \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:54.097913 containerd[1508]: time="2025-01-29T13:06:54.097843348Z" level=info msg="Forcibly stopping sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\"" Jan 29 13:06:54.097972 containerd[1508]: time="2025-01-29T13:06:54.097930952Z" level=info msg="TearDown network for sandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" successfully" Jan 29 13:06:54.100208 containerd[1508]: time="2025-01-29T13:06:54.100138994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.100208 containerd[1508]: time="2025-01-29T13:06:54.100187517Z" level=info msg="RemovePodSandbox \"95f6bb89a876c6ff7a75821b36f30d7cfcfb6959a13af4d998100d2845c6b912\" returns successfully" Jan 29 13:06:54.101023 containerd[1508]: time="2025-01-29T13:06:54.100718505Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" Jan 29 13:06:54.101023 containerd[1508]: time="2025-01-29T13:06:54.100824159Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully" Jan 29 13:06:54.101023 containerd[1508]: time="2025-01-29T13:06:54.100842717Z" level=info msg="StopPodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully" Jan 29 13:06:54.101606 containerd[1508]: time="2025-01-29T13:06:54.101505493Z" level=info msg="RemovePodSandbox for \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" Jan 29 13:06:54.101954 containerd[1508]: time="2025-01-29T13:06:54.101775601Z" level=info msg="Forcibly stopping sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\"" Jan 29 13:06:54.101954 containerd[1508]: time="2025-01-29T13:06:54.101870223Z" level=info msg="TearDown network for sandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" successfully" Jan 29 13:06:54.104396 containerd[1508]: time="2025-01-29T13:06:54.104261718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.104396 containerd[1508]: time="2025-01-29T13:06:54.104356812Z" level=info msg="RemovePodSandbox \"64ee480c91dc3eedcc0d154350362904ddb07a18dd99834836b2839d5c0fb1a8\" returns successfully" Jan 29 13:06:54.104794 containerd[1508]: time="2025-01-29T13:06:54.104733285Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\"" Jan 29 13:06:54.104897 containerd[1508]: time="2025-01-29T13:06:54.104869485Z" level=info msg="TearDown network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" successfully" Jan 29 13:06:54.105021 containerd[1508]: time="2025-01-29T13:06:54.104896371Z" level=info msg="StopPodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" returns successfully" Jan 29 13:06:54.105420 containerd[1508]: time="2025-01-29T13:06:54.105368925Z" level=info msg="RemovePodSandbox for \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\"" Jan 29 13:06:54.105420 containerd[1508]: time="2025-01-29T13:06:54.105404211Z" level=info msg="Forcibly stopping sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\"" Jan 29 13:06:54.105539 containerd[1508]: time="2025-01-29T13:06:54.105485784Z" level=info msg="TearDown network for sandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" successfully" Jan 29 13:06:54.107698 containerd[1508]: time="2025-01-29T13:06:54.107664926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.107976 containerd[1508]: time="2025-01-29T13:06:54.107712633Z" level=info msg="RemovePodSandbox \"9180f5a0a0d90cce678c15907b73958255ceebdb1e1d97b4c5a638d02c7750c6\" returns successfully" Jan 29 13:06:54.108422 containerd[1508]: time="2025-01-29T13:06:54.108168872Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\"" Jan 29 13:06:54.108422 containerd[1508]: time="2025-01-29T13:06:54.108295351Z" level=info msg="TearDown network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" successfully" Jan 29 13:06:54.108422 containerd[1508]: time="2025-01-29T13:06:54.108329179Z" level=info msg="StopPodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" returns successfully" Jan 29 13:06:54.109346 containerd[1508]: time="2025-01-29T13:06:54.109163229Z" level=info msg="RemovePodSandbox for \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\"" Jan 29 13:06:54.109346 containerd[1508]: time="2025-01-29T13:06:54.109196777Z" level=info msg="Forcibly stopping sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\"" Jan 29 13:06:54.109508 containerd[1508]: time="2025-01-29T13:06:54.109371140Z" level=info msg="TearDown network for sandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" successfully" Jan 29 13:06:54.113157 containerd[1508]: time="2025-01-29T13:06:54.113111879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.113278 containerd[1508]: time="2025-01-29T13:06:54.113167862Z" level=info msg="RemovePodSandbox \"fb50b8f712dbeb34e03b467de759b07f6dd51d0aa863da4f57a1a2ffa8c49720\" returns successfully" Jan 29 13:06:54.113545 containerd[1508]: time="2025-01-29T13:06:54.113511919Z" level=info msg="StopPodSandbox for \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\"" Jan 29 13:06:54.113658 containerd[1508]: time="2025-01-29T13:06:54.113629349Z" level=info msg="TearDown network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" successfully" Jan 29 13:06:54.113901 containerd[1508]: time="2025-01-29T13:06:54.113657011Z" level=info msg="StopPodSandbox for \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" returns successfully" Jan 29 13:06:54.114048 containerd[1508]: time="2025-01-29T13:06:54.114018686Z" level=info msg="RemovePodSandbox for \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\"" Jan 29 13:06:54.114186 containerd[1508]: time="2025-01-29T13:06:54.114053982Z" level=info msg="Forcibly stopping sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\"" Jan 29 13:06:54.114186 containerd[1508]: time="2025-01-29T13:06:54.114137373Z" level=info msg="TearDown network for sandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" successfully" Jan 29 13:06:54.116451 containerd[1508]: time="2025-01-29T13:06:54.116395454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 13:06:54.116451 containerd[1508]: time="2025-01-29T13:06:54.116447540Z" level=info msg="RemovePodSandbox \"c587deb900f8b67b66654db172e06c8d8b4473796fee70675ac0bc60b14d4401\" returns successfully" Jan 29 13:06:54.961931 kubelet[1878]: E0129 13:06:54.961860 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:55.962769 kubelet[1878]: E0129 13:06:55.962679 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:56.963021 kubelet[1878]: E0129 13:06:56.962948 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:57.964122 kubelet[1878]: E0129 13:06:57.964048 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:58.964274 kubelet[1878]: E0129 13:06:58.964191 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:06:59.965175 kubelet[1878]: E0129 13:06:59.965096 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:07:00.414095 systemd[1]: run-containerd-runc-k8s.io-0f3d9f4b6cc44189f3805dc26d74f645e3e9e4f0bf98a0c3b020a53986e7f4b1-runc.hhUZNL.mount: Deactivated successfully. 
Jan 29 13:07:00.966043 kubelet[1878]: E0129 13:07:00.965949 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:07:01.967092 kubelet[1878]: E0129 13:07:01.966988 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 13:07:02.967370 kubelet[1878]: E0129 13:07:02.967245 1878 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"