Jan 29 14:13:12.067951 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 14:13:12.068002 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 14:13:12.068017 kernel: BIOS-provided physical RAM map:
Jan 29 14:13:12.068032 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 14:13:12.068042 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 14:13:12.068052 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 14:13:12.068063 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 14:13:12.068074 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 14:13:12.068083 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 14:13:12.068093 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 14:13:12.068103 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 14:13:12.068113 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 14:13:12.068128 kernel: NX (Execute Disable) protection: active
Jan 29 14:13:12.068138 kernel: APIC: Static calls initialized
Jan 29 14:13:12.068150 kernel: SMBIOS 2.8 present.
Jan 29 14:13:12.068162 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 29 14:13:12.068173 kernel: Hypervisor detected: KVM
Jan 29 14:13:12.068189 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 14:13:12.068200 kernel: kvm-clock: using sched offset of 4447656866 cycles
Jan 29 14:13:12.068212 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 14:13:12.068223 kernel: tsc: Detected 2500.032 MHz processor
Jan 29 14:13:12.068334 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 14:13:12.068350 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 14:13:12.068362 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 14:13:12.068373 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 14:13:12.068385 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 14:13:12.068403 kernel: Using GB pages for direct mapping
Jan 29 14:13:12.068415 kernel: ACPI: Early table checksum verification disabled
Jan 29 14:13:12.068427 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 14:13:12.068439 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068450 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068462 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068473 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 14:13:12.068484 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068496 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068531 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068543 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:13:12.068566 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 14:13:12.068589 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 14:13:12.068609 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 14:13:12.068627 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 14:13:12.068651 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 14:13:12.068668 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 14:13:12.068680 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 14:13:12.068709 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 14:13:12.068721 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 14:13:12.068732 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 14:13:12.068744 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 14:13:12.068755 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 14:13:12.068767 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 14:13:12.068783 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 14:13:12.068808 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 14:13:12.068820 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 14:13:12.068832 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 14:13:12.068843 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 14:13:12.068855 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 14:13:12.068869 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 14:13:12.068881 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 14:13:12.068893 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 14:13:12.068909 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 14:13:12.068922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 14:13:12.068934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 14:13:12.068958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 14:13:12.068970 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 14:13:12.068982 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 14:13:12.068994 kernel: Zone ranges:
Jan 29 14:13:12.069006 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 14:13:12.069017 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 14:13:12.069033 kernel: Normal empty
Jan 29 14:13:12.069045 kernel: Movable zone start for each node
Jan 29 14:13:12.069057 kernel: Early memory node ranges
Jan 29 14:13:12.069068 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 14:13:12.069080 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 14:13:12.069091 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 14:13:12.069103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 14:13:12.069114 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 14:13:12.069126 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 14:13:12.069138 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 14:13:12.069154 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 14:13:12.069166 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 14:13:12.069183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 14:13:12.069195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 14:13:12.069206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 14:13:12.069218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 14:13:12.069241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 14:13:12.069279 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 14:13:12.069305 kernel: TSC deadline timer available
Jan 29 14:13:12.069323 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 14:13:12.069335 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 14:13:12.069347 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 14:13:12.069358 kernel: Booting paravirtualized kernel on KVM
Jan 29 14:13:12.069370 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 14:13:12.069382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 14:13:12.069394 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 14:13:12.069405 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 14:13:12.069417 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 14:13:12.069433 kernel: kvm-guest: PV spinlocks enabled
Jan 29 14:13:12.069445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 14:13:12.069458 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 14:13:12.069471 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 14:13:12.069482 kernel: random: crng init done
Jan 29 14:13:12.069494 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 14:13:12.069505 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 14:13:12.069517 kernel: Fallback order for Node 0: 0
Jan 29 14:13:12.069534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 14:13:12.069545 kernel: Policy zone: DMA32
Jan 29 14:13:12.069557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 14:13:12.069568 kernel: software IO TLB: area num 16.
Jan 29 14:13:12.069580 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 194832K reserved, 0K cma-reserved)
Jan 29 14:13:12.069592 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 14:13:12.069604 kernel: Kernel/User page tables isolation: enabled
Jan 29 14:13:12.069616 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 14:13:12.069627 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 14:13:12.069644 kernel: Dynamic Preempt: voluntary
Jan 29 14:13:12.069655 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 14:13:12.069681 kernel: rcu: RCU event tracing is enabled.
Jan 29 14:13:12.069693 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 14:13:12.069706 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 14:13:12.069730 kernel: Rude variant of Tasks RCU enabled.
Jan 29 14:13:12.069757 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 14:13:12.069769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 14:13:12.069782 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 14:13:12.069795 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 14:13:12.069808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 14:13:12.069820 kernel: Console: colour VGA+ 80x25
Jan 29 14:13:12.069837 kernel: printk: console [tty0] enabled
Jan 29 14:13:12.069851 kernel: printk: console [ttyS0] enabled
Jan 29 14:13:12.069870 kernel: ACPI: Core revision 20230628
Jan 29 14:13:12.069883 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 14:13:12.069896 kernel: x2apic enabled
Jan 29 14:13:12.069913 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 14:13:12.069933 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Jan 29 14:13:12.069946 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Jan 29 14:13:12.069958 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 14:13:12.069983 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 14:13:12.069996 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 14:13:12.070008 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 14:13:12.070020 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 14:13:12.070032 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 14:13:12.070061 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 14:13:12.070073 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 14:13:12.070085 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 14:13:12.070109 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 14:13:12.070122 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 14:13:12.070134 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 29 14:13:12.070146 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 14:13:12.070171 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 14:13:12.070184 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 14:13:12.070196 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 14:13:12.070209 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 14:13:12.070227 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 14:13:12.070240 kernel: Freeing SMP alternatives memory: 32K
Jan 29 14:13:12.070278 kernel: pid_max: default: 32768 minimum: 301
Jan 29 14:13:12.070292 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 14:13:12.070305 kernel: landlock: Up and running.
Jan 29 14:13:12.070318 kernel: SELinux: Initializing.
Jan 29 14:13:12.070330 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 14:13:12.070343 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 14:13:12.070356 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 29 14:13:12.070369 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:13:12.070381 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:13:12.070401 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:13:12.070414 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 29 14:13:12.070434 kernel: signal: max sigframe size: 1776
Jan 29 14:13:12.070457 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 14:13:12.070479 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 14:13:12.070503 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 14:13:12.070523 kernel: smp: Bringing up secondary CPUs ...
Jan 29 14:13:12.070539 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 14:13:12.070551 kernel: .... node #0, CPUs: #1
Jan 29 14:13:12.070569 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 14:13:12.070582 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 14:13:12.070595 kernel: smpboot: Max logical packages: 16
Jan 29 14:13:12.070607 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Jan 29 14:13:12.070620 kernel: devtmpfs: initialized
Jan 29 14:13:12.070632 kernel: x86/mm: Memory block size: 128MB
Jan 29 14:13:12.070645 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 14:13:12.070658 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 14:13:12.070671 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 14:13:12.070688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 14:13:12.070701 kernel: audit: initializing netlink subsys (disabled)
Jan 29 14:13:12.070714 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 14:13:12.070726 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 14:13:12.070739 kernel: audit: type=2000 audit(1738159990.578:1): state=initialized audit_enabled=0 res=1
Jan 29 14:13:12.070752 kernel: cpuidle: using governor menu
Jan 29 14:13:12.070770 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 14:13:12.070788 kernel: dca service started, version 1.12.1
Jan 29 14:13:12.070802 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 14:13:12.070820 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 14:13:12.070833 kernel: PCI: Using configuration type 1 for base access
Jan 29 14:13:12.070846 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 14:13:12.070859 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 14:13:12.070872 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 14:13:12.070884 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 14:13:12.070904 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 14:13:12.070916 kernel: ACPI: Added _OSI(Module Device)
Jan 29 14:13:12.070929 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 14:13:12.070947 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 14:13:12.070966 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 14:13:12.070979 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 14:13:12.070992 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 14:13:12.071004 kernel: ACPI: Interpreter enabled
Jan 29 14:13:12.071017 kernel: ACPI: PM: (supports S0 S5)
Jan 29 14:13:12.071029 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 14:13:12.071042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 14:13:12.071055 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 14:13:12.071072 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 14:13:12.071085 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 14:13:12.071406 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 14:13:12.071598 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 14:13:12.071796 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 14:13:12.071816 kernel: PCI host bridge to bus 0000:00
Jan 29 14:13:12.072012 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 14:13:12.072211 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 14:13:12.072450 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 14:13:12.072602 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 14:13:12.072782 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 14:13:12.072943 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 14:13:12.073094 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 14:13:12.073331 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 14:13:12.073601 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 14:13:12.073785 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 14:13:12.073948 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 14:13:12.074135 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 14:13:12.074342 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 14:13:12.074561 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.074755 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 14:13:12.074968 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.075156 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 14:13:12.076214 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.076421 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 14:13:12.076648 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.076847 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 14:13:12.077049 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.077230 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 14:13:12.077453 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.077654 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 14:13:12.077835 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.078042 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 14:13:12.080467 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 14:13:12.080650 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 14:13:12.080858 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 14:13:12.081029 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 14:13:12.081196 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 14:13:12.081407 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 14:13:12.081583 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 14:13:12.081814 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 14:13:12.081983 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 14:13:12.082149 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 14:13:12.084399 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 14:13:12.084615 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 14:13:12.084803 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 14:13:12.085024 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 14:13:12.085226 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 29 14:13:12.086456 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 14:13:12.086662 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 14:13:12.086849 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 14:13:12.087057 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 14:13:12.087923 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 14:13:12.088129 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 14:13:12.088381 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 14:13:12.088549 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:13:12.088752 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 14:13:12.088957 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 14:13:12.089133 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 14:13:12.089366 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 14:13:12.089538 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 14:13:12.089745 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 14:13:12.089936 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 14:13:12.090114 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 14:13:12.090402 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 14:13:12.090596 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:13:12.090823 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 14:13:12.091017 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 14:13:12.091202 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 14:13:12.093313 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 14:13:12.093498 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:13:12.093678 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 14:13:12.093848 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 14:13:12.094048 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:13:12.094219 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 14:13:12.096439 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 14:13:12.096615 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:13:12.096824 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 14:13:12.096997 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 14:13:12.097157 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:13:12.097368 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 14:13:12.097543 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 14:13:12.097717 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:13:12.097893 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 14:13:12.098048 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 14:13:12.098203 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:13:12.098221 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 14:13:12.100297 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 14:13:12.100320 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 14:13:12.100342 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 14:13:12.100356 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 14:13:12.100369 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 14:13:12.100382 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 14:13:12.100395 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 14:13:12.100408 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 14:13:12.100421 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 14:13:12.100434 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 14:13:12.100447 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 14:13:12.100465 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 14:13:12.100478 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 14:13:12.100491 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 14:13:12.100504 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 14:13:12.100517 kernel: iommu: Default domain type: Translated
Jan 29 14:13:12.100530 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 14:13:12.100543 kernel: PCI: Using ACPI for IRQ routing
Jan 29 14:13:12.100556 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 14:13:12.100568 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 14:13:12.100586 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 14:13:12.100766 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 14:13:12.100937 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 14:13:12.101101 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 14:13:12.101121 kernel: vgaarb: loaded
Jan 29 14:13:12.101135 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 14:13:12.101148 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 14:13:12.101161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 14:13:12.101181 kernel: pnp: PnP ACPI init
Jan 29 14:13:12.101426 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 14:13:12.101448 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 14:13:12.101462 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 14:13:12.101475 kernel: NET: Registered PF_INET protocol family
Jan 29 14:13:12.101488 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 14:13:12.101501 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 14:13:12.101514 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 14:13:12.101527 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 14:13:12.101548 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 14:13:12.101561 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 14:13:12.101574 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 14:13:12.101587 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 14:13:12.101600 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 14:13:12.101612 kernel: NET: Registered PF_XDP protocol family
Jan 29 14:13:12.101774 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 29 14:13:12.101940 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 14:13:12.102126 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 14:13:12.103395 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 14:13:12.103565 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 14:13:12.103731 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 14:13:12.103901 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 14:13:12.104090 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 14:13:12.105369 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 14:13:12.105549 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 14:13:12.105728 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 14:13:12.105909 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 14:13:12.106069 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 14:13:12.107292 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 14:13:12.107476 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 14:13:12.107660 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 14:13:12.107887 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 14:13:12.108081 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 14:13:12.108277 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 14:13:12.108452 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 14:13:12.108623 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 14:13:12.108813 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:13:12.108986 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 14:13:12.109146 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 14:13:12.113385 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 14:13:12.113571 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:13:12.113750 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 14:13:12.113924 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 14:13:12.114105 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 14:13:12.114332 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:13:12.114511 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 14:13:12.114694 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 14:13:12.114971 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 14:13:12.115222 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:13:12.117762 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 14:13:12.117930 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 14:13:12.118125 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 14:13:12.119118 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:13:12.119410 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 14:13:12.119598 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 14:13:12.119778 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 14:13:12.119954 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:13:12.120121 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 14:13:12.120325 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 14:13:12.120502 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 14:13:12.120687 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:13:12.120897 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 14:13:12.121075 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 14:13:12.121322 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 14:13:12.121492 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:13:12.121663 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 14:13:12.121854 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 14:13:12.122010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 14:13:12.122174 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 14:13:12.122395 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 14:13:12.122569 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 14:13:12.122769 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 14:13:12.122938 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 14:13:12.123115 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:13:12.123325 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 14:13:12.123521 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 29 14:13:12.123699 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 14:13:12.123888 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:13:12.124087 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 29 14:13:12.124371 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 14:13:12.124534 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:13:12.124711 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 29 14:13:12.124878 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 14:13:12.125053 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:13:12.125275 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 29 14:13:12.125436 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 14:13:12.125601 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:13:12.125803 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 29 14:13:12.125959 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 14:13:12.126136 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:13:12.126383 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 29 14:13:12.126554 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 14:13:12.126703 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:13:12.126891 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 29 14:13:12.127064 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 14:13:12.127223 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:13:12.127284 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 14:13:12.127301 kernel: PCI: CLS 0 bytes, default 64
Jan 29 14:13:12.127314 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
29 14:13:12.127328 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 14:13:12.127342 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 14:13:12.127356 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 29 14:13:12.127370 kernel: Initialise system trusted keyrings Jan 29 14:13:12.127391 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 14:13:12.127405 kernel: Key type asymmetric registered Jan 29 14:13:12.127418 kernel: Asymmetric key parser 'x509' registered Jan 29 14:13:12.127431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 14:13:12.127445 kernel: io scheduler mq-deadline registered Jan 29 14:13:12.127458 kernel: io scheduler kyber registered Jan 29 14:13:12.127471 kernel: io scheduler bfq registered Jan 29 14:13:12.127659 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 14:13:12.127817 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 14:13:12.127995 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.128183 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 14:13:12.128386 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 14:13:12.128554 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.128731 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 29 14:13:12.128892 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 29 14:13:12.129074 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.129343 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 29 
14:13:12.129513 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 29 14:13:12.129680 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.129847 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 29 14:13:12.130023 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 29 14:13:12.130197 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.130420 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 29 14:13:12.130598 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 29 14:13:12.130774 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.130940 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 29 14:13:12.131121 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 29 14:13:12.131327 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.131494 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 29 14:13:12.131670 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 29 14:13:12.131873 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:13:12.131895 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 14:13:12.131915 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 14:13:12.131936 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 14:13:12.131950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 14:13:12.131964 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 14:13:12.131985 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 14:13:12.131998 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 14:13:12.132012 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 14:13:12.132178 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 14:13:12.132200 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 14:13:12.132400 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 14:13:12.132557 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T14:13:11 UTC (1738159991) Jan 29 14:13:12.132718 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 14:13:12.132737 kernel: intel_pstate: CPU model not supported Jan 29 14:13:12.132757 kernel: NET: Registered PF_INET6 protocol family Jan 29 14:13:12.132770 kernel: Segment Routing with IPv6 Jan 29 14:13:12.132783 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 14:13:12.132796 kernel: NET: Registered PF_PACKET protocol family Jan 29 14:13:12.132809 kernel: Key type dns_resolver registered Jan 29 14:13:12.132826 kernel: IPI shorthand broadcast: enabled Jan 29 14:13:12.132845 kernel: sched_clock: Marking stable (1276014063, 236875613)->(1640828610, -127938934) Jan 29 14:13:12.132858 kernel: registered taskstats version 1 Jan 29 14:13:12.132871 kernel: Loading compiled-in X.509 certificates Jan 29 14:13:12.132883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 14:13:12.132896 kernel: Key type .fscrypt registered Jan 29 14:13:12.132917 kernel: Key type fscrypt-provisioning registered Jan 29 14:13:12.132930 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 14:13:12.132942 kernel: ima: Allocated hash algorithm: sha1 Jan 29 14:13:12.132960 kernel: ima: No architecture policies found Jan 29 14:13:12.132977 kernel: clk: Disabling unused clocks Jan 29 14:13:12.132990 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 14:13:12.133003 kernel: Write protecting the kernel read-only data: 36864k Jan 29 14:13:12.133016 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 14:13:12.133028 kernel: Run /init as init process Jan 29 14:13:12.133041 kernel: with arguments: Jan 29 14:13:12.133053 kernel: /init Jan 29 14:13:12.133077 kernel: with environment: Jan 29 14:13:12.133095 kernel: HOME=/ Jan 29 14:13:12.133108 kernel: TERM=linux Jan 29 14:13:12.133120 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 14:13:12.133159 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 14:13:12.133178 systemd[1]: Detected virtualization kvm. Jan 29 14:13:12.133192 systemd[1]: Detected architecture x86-64. Jan 29 14:13:12.133206 systemd[1]: Running in initrd. Jan 29 14:13:12.133221 systemd[1]: No hostname configured, using default hostname. Jan 29 14:13:12.133270 systemd[1]: Hostname set to . Jan 29 14:13:12.133286 systemd[1]: Initializing machine ID from VM UUID. Jan 29 14:13:12.133302 systemd[1]: Queued start job for default target initrd.target. Jan 29 14:13:12.133317 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 14:13:12.133331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 14:13:12.133346 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 14:13:12.133361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 14:13:12.133382 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 14:13:12.133397 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 14:13:12.133414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 14:13:12.133428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 14:13:12.133443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 14:13:12.133457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 14:13:12.133472 systemd[1]: Reached target paths.target - Path Units. Jan 29 14:13:12.133492 systemd[1]: Reached target slices.target - Slice Units. Jan 29 14:13:12.133506 systemd[1]: Reached target swap.target - Swaps. Jan 29 14:13:12.133532 systemd[1]: Reached target timers.target - Timer Units. Jan 29 14:13:12.133546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 14:13:12.133560 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 14:13:12.133574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 14:13:12.133601 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 14:13:12.133614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 14:13:12.133628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 14:13:12.133659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 14:13:12.133673 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 14:13:12.133687 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 14:13:12.133701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 14:13:12.133715 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 14:13:12.133729 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 14:13:12.133743 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 14:13:12.133757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 14:13:12.133771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 14:13:12.133831 systemd-journald[202]: Collecting audit messages is disabled. Jan 29 14:13:12.133864 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 14:13:12.133879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 14:13:12.133899 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 14:13:12.133919 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 14:13:12.133934 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 14:13:12.133956 kernel: Bridge firewalling registered Jan 29 14:13:12.133971 systemd-journald[202]: Journal started Jan 29 14:13:12.134012 systemd-journald[202]: Runtime Journal (/run/log/journal/bc271b7c69ef4b72a12a739bc39a0469) is 4.7M, max 38.0M, 33.2M free. Jan 29 14:13:12.070818 systemd-modules-load[203]: Inserted module 'overlay' Jan 29 14:13:12.188008 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 14:13:12.118217 systemd-modules-load[203]: Inserted module 'br_netfilter' Jan 29 14:13:12.189623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 14:13:12.191649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 14:13:12.200462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 14:13:12.207391 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 14:13:12.216449 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 14:13:12.219347 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 14:13:12.222650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 14:13:12.236440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 14:13:12.243563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 14:13:12.250440 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 14:13:12.252578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 14:13:12.263403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 14:13:12.265619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 14:13:12.282908 dracut-cmdline[233]: dracut-dracut-053 Jan 29 14:13:12.289425 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 14:13:12.305975 systemd-resolved[235]: Positive Trust Anchors: Jan 29 14:13:12.306016 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 14:13:12.306062 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 14:13:12.310992 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 29 14:13:12.313172 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 14:13:12.314428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 14:13:12.407289 kernel: SCSI subsystem initialized Jan 29 14:13:12.420276 kernel: Loading iSCSI transport class v2.0-870. Jan 29 14:13:12.434282 kernel: iscsi: registered transport (tcp) Jan 29 14:13:12.461582 kernel: iscsi: registered transport (qla4xxx) Jan 29 14:13:12.461669 kernel: QLogic iSCSI HBA Driver Jan 29 14:13:12.519528 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 14:13:12.525517 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 14:13:12.567982 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 14:13:12.568071 kernel: device-mapper: uevent: version 1.0.3 Jan 29 14:13:12.568875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 14:13:12.619335 kernel: raid6: sse2x4 gen() 13461 MB/s Jan 29 14:13:12.637278 kernel: raid6: sse2x2 gen() 8506 MB/s Jan 29 14:13:12.656143 kernel: raid6: sse2x1 gen() 9301 MB/s Jan 29 14:13:12.656183 kernel: raid6: using algorithm sse2x4 gen() 13461 MB/s Jan 29 14:13:12.675113 kernel: raid6: .... xor() 7429 MB/s, rmw enabled Jan 29 14:13:12.675166 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 14:13:12.702278 kernel: xor: automatically using best checksumming function avx Jan 29 14:13:12.908413 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 14:13:12.927903 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 14:13:12.934578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 14:13:12.966695 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jan 29 14:13:12.973997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 14:13:12.983411 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 14:13:13.005831 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Jan 29 14:13:13.050185 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 14:13:13.058486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 14:13:13.171369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 14:13:13.179627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 14:13:13.212500 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 14:13:13.215947 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 14:13:13.219512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 14:13:13.221557 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 14:13:13.230510 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 14:13:13.260604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 14:13:13.308265 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 29 14:13:13.410152 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 14:13:13.410182 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 14:13:13.410415 kernel: libata version 3.00 loaded. Jan 29 14:13:13.410439 kernel: AVX version of gcm_enc/dec engaged. Jan 29 14:13:13.410479 kernel: AES CTR mode by8 optimization enabled Jan 29 14:13:13.410500 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 14:13:13.410519 kernel: GPT:17805311 != 125829119 Jan 29 14:13:13.410537 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 14:13:13.410557 kernel: GPT:17805311 != 125829119 Jan 29 14:13:13.410574 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 29 14:13:13.410592 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 14:13:13.410617 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 14:13:13.421871 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 14:13:13.421900 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 14:13:13.422113 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 14:13:13.422357 kernel: ACPI: bus type USB registered Jan 29 14:13:13.422379 kernel: scsi host0: ahci Jan 29 14:13:13.422834 kernel: scsi host1: ahci Jan 29 14:13:13.423034 kernel: usbcore: registered new interface driver usbfs Jan 29 14:13:13.423056 kernel: scsi host2: ahci Jan 29 14:13:13.423322 kernel: scsi host3: ahci Jan 29 14:13:13.423531 kernel: scsi host4: ahci Jan 29 14:13:13.423718 kernel: usbcore: registered new interface driver hub Jan 29 14:13:13.423739 kernel: usbcore: registered new device driver usb Jan 29 14:13:13.423764 kernel: scsi host5: ahci Jan 29 14:13:13.423951 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 29 14:13:13.423972 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 29 14:13:13.423998 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 29 14:13:13.424018 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 29 14:13:13.424036 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 29 14:13:13.424055 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 29 14:13:13.358779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 29 14:13:13.529677 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (469) Jan 29 14:13:13.529728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) Jan 29 14:13:13.358959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 14:13:13.361764 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 14:13:13.362688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 14:13:13.362868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 14:13:13.369511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 14:13:13.386548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 14:13:13.470682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 14:13:13.530825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 14:13:13.537581 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 14:13:13.540260 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 14:13:13.559275 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 14:13:13.566473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 14:13:13.578456 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 14:13:13.582415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 14:13:13.585948 disk-uuid[557]: Primary Header is updated. 
Jan 29 14:13:13.585948 disk-uuid[557]: Secondary Entries is updated. Jan 29 14:13:13.585948 disk-uuid[557]: Secondary Header is updated. Jan 29 14:13:13.594267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 14:13:13.601272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 14:13:13.621719 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 14:13:13.734919 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 14:13:13.734995 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 14:13:13.735025 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 14:13:13.735044 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 14:13:13.738207 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 14:13:13.740285 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 14:13:13.779281 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 14:13:13.797683 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 29 14:13:13.797946 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 14:13:13.798169 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 14:13:13.798421 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 29 14:13:13.798642 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 29 14:13:13.798880 kernel: hub 1-0:1.0: USB hub found Jan 29 14:13:13.799156 kernel: hub 1-0:1.0: 4 ports detected Jan 29 14:13:13.799409 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 29 14:13:13.799653 kernel: hub 2-0:1.0: USB hub found Jan 29 14:13:13.799873 kernel: hub 2-0:1.0: 4 ports detected Jan 29 14:13:14.029470 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 14:13:14.170271 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 14:13:14.176772 kernel: usbcore: registered new interface driver usbhid Jan 29 14:13:14.176813 kernel: usbhid: USB HID core driver Jan 29 14:13:14.185615 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 29 14:13:14.186152 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 29 14:13:14.605377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 14:13:14.606462 disk-uuid[558]: The operation has completed successfully. Jan 29 14:13:14.665838 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 14:13:14.666029 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 14:13:14.678458 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 14:13:14.692589 sh[585]: Success Jan 29 14:13:14.710380 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 29 14:13:14.777898 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 14:13:14.797382 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 14:13:14.801508 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 14:13:14.834275 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 14:13:14.834330 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 14:13:14.838900 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 14:13:14.838940 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 14:13:14.842148 kernel: BTRFS info (device dm-0): using free space tree Jan 29 14:13:14.853500 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 14:13:14.855255 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 14:13:14.861478 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 14:13:14.864102 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 14:13:14.886542 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 14:13:14.886583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 14:13:14.886602 kernel: BTRFS info (device vda6): using free space tree Jan 29 14:13:14.892263 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 14:13:14.905924 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 14:13:14.909367 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 14:13:14.915935 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 14:13:14.923030 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 14:13:15.039098 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 14:13:15.054564 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 14:13:15.091270 ignition[677]: Ignition 2.20.0 Jan 29 14:13:15.092481 ignition[677]: Stage: fetch-offline Jan 29 14:13:15.092608 ignition[677]: no configs at "/usr/lib/ignition/base.d" Jan 29 14:13:15.096844 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 14:13:15.092627 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 14:13:15.092794 ignition[677]: parsed url from cmdline: "" Jan 29 14:13:15.092801 ignition[677]: no config URL provided Jan 29 14:13:15.092823 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 14:13:15.092839 ignition[677]: no config at "/usr/lib/ignition/user.ign" Jan 29 14:13:15.092848 ignition[677]: failed to fetch config: resource requires networking Jan 29 14:13:15.093200 ignition[677]: Ignition finished successfully Jan 29 14:13:15.105823 systemd-networkd[774]: lo: Link UP Jan 29 14:13:15.105838 systemd-networkd[774]: lo: Gained carrier Jan 29 14:13:15.108478 systemd-networkd[774]: Enumeration completed Jan 29 14:13:15.108646 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 14:13:15.109074 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 14:13:15.109080 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 14:13:15.111257 systemd-networkd[774]: eth0: Link UP Jan 29 14:13:15.111264 systemd-networkd[774]: eth0: Gained carrier Jan 29 14:13:15.111276 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 14:13:15.112449 systemd[1]: Reached target network.target - Network. Jan 29 14:13:15.122469 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 14:13:15.139409 ignition[777]: Ignition 2.20.0
Jan 29 14:13:15.139444 ignition[777]: Stage: fetch
Jan 29 14:13:15.139749 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:13:15.141366 systemd-networkd[774]: eth0: DHCPv4 address 10.230.31.206/30, gateway 10.230.31.205 acquired from 10.230.31.205
Jan 29 14:13:15.139769 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:13:15.139919 ignition[777]: parsed url from cmdline: ""
Jan 29 14:13:15.139927 ignition[777]: no config URL provided
Jan 29 14:13:15.139937 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 14:13:15.139955 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 29 14:13:15.140112 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 14:13:15.140217 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 14:13:15.140284 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 14:13:15.140477 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 14:13:15.340653 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 29 14:13:15.357422 ignition[777]: GET result: OK
Jan 29 14:13:15.357547 ignition[777]: parsing config with SHA512: bdc85488146c2953bca591c96b03b278681981b69366581576ebffbd3608f8b06de2ace8dfbf3fd18a42bec8bbe3bd071c412aa9b191b4a6eb001d799a448110
Jan 29 14:13:15.362979 unknown[777]: fetched base config from "system"
Jan 29 14:13:15.362994 unknown[777]: fetched base config from "system"
Jan 29 14:13:15.363533 ignition[777]: fetch: fetch complete
Jan 29 14:13:15.363003 unknown[777]: fetched user config from "openstack"
Jan 29 14:13:15.363543 ignition[777]: fetch: fetch passed
Jan 29 14:13:15.365911 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 14:13:15.363619 ignition[777]: Ignition finished successfully
Jan 29 14:13:15.379491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 14:13:15.400221 ignition[784]: Ignition 2.20.0
Jan 29 14:13:15.400266 ignition[784]: Stage: kargs
Jan 29 14:13:15.400525 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:13:15.403100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 14:13:15.400554 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:13:15.401485 ignition[784]: kargs: kargs passed
Jan 29 14:13:15.401578 ignition[784]: Ignition finished successfully
Jan 29 14:13:15.411423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 14:13:15.431550 ignition[790]: Ignition 2.20.0
Jan 29 14:13:15.431570 ignition[790]: Stage: disks
Jan 29 14:13:15.431800 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:13:15.433962 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 14:13:15.431820 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:13:15.436651 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 14:13:15.432729 ignition[790]: disks: disks passed
Jan 29 14:13:15.437562 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 14:13:15.432806 ignition[790]: Ignition finished successfully
Jan 29 14:13:15.439328 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 14:13:15.440997 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 14:13:15.442677 systemd[1]: Reached target basic.target - Basic System.
Jan 29 14:13:15.449488 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 14:13:15.472189 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 14:13:15.475700 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 14:13:15.483751 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 14:13:15.608260 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none.
Jan 29 14:13:15.608828 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 14:13:15.610385 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 14:13:15.619350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 14:13:15.622389 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 14:13:15.623555 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 14:13:15.626525 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 14:13:15.628688 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 14:13:15.628737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 14:13:15.642045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 14:13:15.646809 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Jan 29 14:13:15.650271 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 14:13:15.653406 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:13:15.653442 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:13:15.653623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 14:13:15.667265 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:13:15.672830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 14:13:15.733345 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 14:13:15.745104 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 29 14:13:15.755735 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 14:13:15.765060 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 14:13:15.876613 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 14:13:15.881389 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 14:13:15.885430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 14:13:15.900307 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 14:13:15.901590 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 14:13:15.931739 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 14:13:15.934088 ignition[923]: INFO : Ignition 2.20.0
Jan 29 14:13:15.934088 ignition[923]: INFO : Stage: mount
Jan 29 14:13:15.936086 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:13:15.936086 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:13:15.936086 ignition[923]: INFO : mount: mount passed
Jan 29 14:13:15.936086 ignition[923]: INFO : Ignition finished successfully
Jan 29 14:13:15.937032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 14:13:16.673745 systemd-networkd[774]: eth0: Gained IPv6LL
Jan 29 14:13:18.145398 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87f3:24:19ff:fee6:1fce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87f3:24:19ff:fee6:1fce/64 assigned by NDisc.
Jan 29 14:13:18.145414 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 14:13:22.816680 coreos-metadata[808]: Jan 29 14:13:22.816 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 14:13:22.842141 coreos-metadata[808]: Jan 29 14:13:22.842 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 14:13:22.853755 coreos-metadata[808]: Jan 29 14:13:22.853 INFO Fetch successful
Jan 29 14:13:22.854652 coreos-metadata[808]: Jan 29 14:13:22.854 INFO wrote hostname srv-wt390.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 14:13:22.856383 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 14:13:22.856613 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 14:13:22.865377 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 14:13:22.884570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 14:13:22.896275 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Jan 29 14:13:22.899539 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 14:13:22.899585 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:13:22.901442 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:13:22.907451 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:13:22.909678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 14:13:22.939350 ignition[958]: INFO : Ignition 2.20.0
Jan 29 14:13:22.939350 ignition[958]: INFO : Stage: files
Jan 29 14:13:22.941223 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:13:22.941223 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:13:22.941223 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 14:13:22.944268 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 14:13:22.944268 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 14:13:22.946318 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 14:13:22.946318 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 14:13:22.948432 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 14:13:22.948432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 14:13:22.948432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 14:13:22.946570 unknown[958]: wrote ssh authorized keys file for user: core
Jan 29 14:13:22.952952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 14:13:22.952952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 14:13:22.952952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 14:13:22.952952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 14:13:22.952952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 14:13:22.952952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 29 14:13:23.520362 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 14:13:25.245394 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 14:13:25.247907 ignition[958]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 14:13:25.247907 ignition[958]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 14:13:25.247907 ignition[958]: INFO : files: files passed
Jan 29 14:13:25.247907 ignition[958]: INFO : Ignition finished successfully
Jan 29 14:13:25.248985 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 14:13:25.266615 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 14:13:25.271758 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 14:13:25.276301 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 14:13:25.277306 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 14:13:25.294890 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:13:25.294890 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:13:25.298384 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:13:25.300567 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 14:13:25.302095 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 14:13:25.309475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 14:13:25.345024 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 14:13:25.345260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 14:13:25.347711 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 14:13:25.348731 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 14:13:25.351516 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 14:13:25.356435 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 14:13:25.387807 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 14:13:25.395481 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 14:13:25.409354 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:13:25.411318 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:13:25.412387 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 14:13:25.414018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 14:13:25.414181 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 14:13:25.416188 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 14:13:25.417193 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 14:13:25.418668 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 14:13:25.420084 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 14:13:25.421555 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 14:13:25.423108 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 14:13:25.424702 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 14:13:25.426390 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 14:13:25.427911 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 14:13:25.429567 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 14:13:25.430955 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 14:13:25.431119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 14:13:25.432939 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:13:25.433924 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:13:25.435341 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 14:13:25.437738 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:13:25.439000 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 14:13:25.439183 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 14:13:25.441318 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 14:13:25.441586 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 14:13:25.443543 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 14:13:25.443708 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 14:13:25.452588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 14:13:25.454460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 14:13:25.459554 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 14:13:25.459749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:13:25.463570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 14:13:25.463752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 14:13:25.472401 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 14:13:25.472554 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 14:13:25.483026 ignition[1010]: INFO : Ignition 2.20.0
Jan 29 14:13:25.483026 ignition[1010]: INFO : Stage: umount
Jan 29 14:13:25.485528 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:13:25.485528 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:13:25.485528 ignition[1010]: INFO : umount: umount passed
Jan 29 14:13:25.485528 ignition[1010]: INFO : Ignition finished successfully
Jan 29 14:13:25.485714 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 14:13:25.487346 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 14:13:25.488778 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 14:13:25.488955 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 14:13:25.491529 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 14:13:25.491601 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 14:13:25.492369 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 14:13:25.492445 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 14:13:25.493168 systemd[1]: Stopped target network.target - Network.
Jan 29 14:13:25.498293 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 14:13:25.498375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 14:13:25.500890 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 14:13:25.502198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 14:13:25.507356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:13:25.508169 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 14:13:25.515886 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 14:13:25.517413 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 14:13:25.517494 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 14:13:25.518735 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 14:13:25.518849 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 14:13:25.520089 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 14:13:25.520157 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 14:13:25.521476 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 14:13:25.521543 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 14:13:25.523206 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 14:13:25.524906 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 14:13:25.527770 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 14:13:25.529387 systemd-networkd[774]: eth0: DHCPv6 lease lost
Jan 29 14:13:25.533217 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 14:13:25.533459 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 14:13:25.536847 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 14:13:25.537041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 14:13:25.541024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 14:13:25.541128 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:13:25.548418 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 14:13:25.550290 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 14:13:25.550370 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 14:13:25.551382 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 14:13:25.551452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:13:25.554086 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 14:13:25.554155 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 14:13:25.556879 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 14:13:25.556947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:13:25.558751 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:13:25.577030 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 14:13:25.577355 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:13:25.579902 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 14:13:25.580039 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 14:13:25.581863 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 14:13:25.581921 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 14:13:25.583610 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 14:13:25.583686 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 14:13:25.585906 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 14:13:25.585981 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 14:13:25.587594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 14:13:25.587712 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 14:13:25.599502 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 14:13:25.601928 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 14:13:25.602008 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:13:25.602838 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 14:13:25.602910 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 14:13:25.606611 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 14:13:25.606685 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 14:13:25.610373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 14:13:25.610454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:13:25.613079 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 14:13:25.613253 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 14:13:25.614495 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 14:13:25.614635 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 14:13:25.616335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 14:13:25.616470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 14:13:25.619477 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 14:13:25.620723 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 14:13:25.620839 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 14:13:25.629540 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 14:13:25.640646 systemd[1]: Switching root.
Jan 29 14:13:25.677377 systemd-journald[202]: Journal stopped
Jan 29 14:13:27.145428 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Jan 29 14:13:27.145541 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 14:13:27.145568 kernel: SELinux: policy capability open_perms=1
Jan 29 14:13:27.145600 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 14:13:27.145641 kernel: SELinux: policy capability always_check_network=0
Jan 29 14:13:27.145668 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 14:13:27.145688 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 14:13:27.145706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 14:13:27.145732 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 14:13:27.145753 systemd[1]: Successfully loaded SELinux policy in 50.986ms.
Jan 29 14:13:27.145804 kernel: audit: type=1403 audit(1738160005.920:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 14:13:27.145833 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.122ms.
Jan 29 14:13:27.145866 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 14:13:27.145889 systemd[1]: Detected virtualization kvm.
Jan 29 14:13:27.145922 systemd[1]: Detected architecture x86-64.
Jan 29 14:13:27.145943 systemd[1]: Detected first boot.
Jan 29 14:13:27.145961 systemd[1]: Hostname set to .
Jan 29 14:13:27.145994 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 14:13:27.146016 zram_generator::config[1054]: No configuration found.
Jan 29 14:13:27.146036 systemd[1]: Populated /etc with preset unit settings.
Jan 29 14:13:27.146068 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 14:13:27.146089 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 14:13:27.146109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 14:13:27.146129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 14:13:27.146149 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 14:13:27.146180 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 14:13:27.146203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 14:13:27.146341 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 14:13:27.146371 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 14:13:27.146392 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 14:13:27.146411 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 14:13:27.146431 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:13:27.146452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:13:27.146474 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 14:13:27.146510 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 14:13:27.146537 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 14:13:27.146558 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 14:13:27.146578 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 14:13:27.146598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:13:27.146617 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 14:13:27.146636 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 14:13:27.146668 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 14:13:27.146691 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 14:13:27.146721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:13:27.146742 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 14:13:27.146777 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 14:13:27.146799 systemd[1]: Reached target swap.target - Swaps.
Jan 29 14:13:27.146819 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 14:13:27.146839 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 14:13:27.146876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:13:27.146897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 14:13:27.146924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 14:13:27.146945 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 14:13:27.146965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 14:13:27.146984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 14:13:27.147004 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 14:13:27.147025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:27.147044 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 14:13:27.147075 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 14:13:27.147097 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 14:13:27.147118 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 14:13:27.147137 systemd[1]: Reached target machines.target - Containers.
Jan 29 14:13:27.147171 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 14:13:27.147191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:13:27.147212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 14:13:27.147301 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 14:13:27.147334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 14:13:27.147354 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 14:13:27.147373 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 14:13:27.147393 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 14:13:27.147425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 14:13:27.147447 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 14:13:27.147478 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 14:13:27.147499 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 14:13:27.147530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 14:13:27.147549 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 14:13:27.147568 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 14:13:27.147586 kernel: fuse: init (API version 7.39)
Jan 29 14:13:27.147604 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 14:13:27.147623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 14:13:27.147642 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 14:13:27.147681 kernel: ACPI: bus type drm_connector registered
Jan 29 14:13:27.147703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 14:13:27.147723 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 14:13:27.147771 systemd[1]: Stopped verity-setup.service.
Jan 29 14:13:27.147793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:27.147824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 14:13:27.147843 kernel: loop: module loaded
Jan 29 14:13:27.147862 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 14:13:27.147881 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 14:13:27.147914 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 14:13:27.147936 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 14:13:27.147963 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 14:13:27.148014 systemd-journald[1146]: Collecting audit messages is disabled.
Jan 29 14:13:27.148073 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 14:13:27.148098 systemd-journald[1146]: Journal started
Jan 29 14:13:27.148133 systemd-journald[1146]: Runtime Journal (/run/log/journal/bc271b7c69ef4b72a12a739bc39a0469) is 4.7M, max 38.0M, 33.2M free.
Jan 29 14:13:26.700643 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 14:13:26.725087 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 14:13:26.725836 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 14:13:27.150249 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 14:13:27.154033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 14:13:27.155300 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 14:13:27.155552 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 14:13:27.156929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 14:13:27.157160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 14:13:27.158411 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 14:13:27.158650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 14:13:27.159898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 14:13:27.160155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 14:13:27.161681 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 14:13:27.161921 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 14:13:27.169112 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 14:13:27.169427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 14:13:27.170603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 14:13:27.181167 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 14:13:27.182629 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 14:13:27.201275 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 14:13:27.213324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 14:13:27.223412 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 14:13:27.226392 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 14:13:27.226473 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 14:13:27.232799 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 14:13:27.246463 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 14:13:27.248890 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 14:13:27.249940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:13:27.259391 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 14:13:27.261792 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 14:13:27.262681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 14:13:27.266468 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 14:13:27.268422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 14:13:27.273454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 14:13:27.278468 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 14:13:27.282463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 14:13:27.287999 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 14:13:27.295515 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 14:13:27.297933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 14:13:27.326352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 14:13:27.328576 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 14:13:27.337493 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 14:13:27.366428 systemd-journald[1146]: Time spent on flushing to /var/log/journal/bc271b7c69ef4b72a12a739bc39a0469 is 106.243ms for 1129 entries.
Jan 29 14:13:27.366428 systemd-journald[1146]: System Journal (/var/log/journal/bc271b7c69ef4b72a12a739bc39a0469) is 8.0M, max 584.8M, 576.8M free.
Jan 29 14:13:27.507531 systemd-journald[1146]: Received client request to flush runtime journal.
Jan 29 14:13:27.507611 kernel: loop0: detected capacity change from 0 to 140992
Jan 29 14:13:27.507642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 14:13:27.400373 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 29 14:13:27.512282 kernel: loop1: detected capacity change from 0 to 138184
Jan 29 14:13:27.400394 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 29 14:13:27.410689 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 14:13:27.411845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 14:13:27.413374 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 14:13:27.422513 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 14:13:27.444277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:13:27.467336 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 14:13:27.486088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:13:27.512918 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 14:13:27.516836 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 14:13:27.553607 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 14:13:27.565423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 14:13:27.575129 kernel: loop2: detected capacity change from 0 to 218376
Jan 29 14:13:27.656672 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 14:13:27.656708 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 14:13:27.670285 kernel: loop3: detected capacity change from 0 to 8
Jan 29 14:13:27.686481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:13:27.706593 kernel: loop4: detected capacity change from 0 to 140992
Jan 29 14:13:27.749294 kernel: loop5: detected capacity change from 0 to 138184
Jan 29 14:13:27.790011 kernel: loop6: detected capacity change from 0 to 218376
Jan 29 14:13:27.828946 kernel: loop7: detected capacity change from 0 to 8
Jan 29 14:13:27.833669 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 14:13:27.837777 (sd-merge)[1214]: Merged extensions into '/usr'.
Jan 29 14:13:27.854448 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 14:13:27.854484 systemd[1]: Reloading...
Jan 29 14:13:28.036277 zram_generator::config[1237]: No configuration found.
Jan 29 14:13:28.118284 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 14:13:28.276198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:13:28.350793 systemd[1]: Reloading finished in 495 ms.
Jan 29 14:13:28.379849 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 14:13:28.381307 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 14:13:28.382545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 14:13:28.393577 systemd[1]: Starting ensure-sysext.service...
Jan 29 14:13:28.398439 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 14:13:28.404597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:13:28.413359 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Jan 29 14:13:28.413395 systemd[1]: Reloading...
Jan 29 14:13:28.460573 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 14:13:28.461198 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 14:13:28.465862 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 14:13:28.466298 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Jan 29 14:13:28.466410 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Jan 29 14:13:28.469019 systemd-udevd[1299]: Using default interface naming scheme 'v255'.
Jan 29 14:13:28.474974 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 14:13:28.474992 systemd-tmpfiles[1298]: Skipping /boot
Jan 29 14:13:28.501075 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 14:13:28.501097 systemd-tmpfiles[1298]: Skipping /boot
Jan 29 14:13:28.540325 zram_generator::config[1325]: No configuration found.
Jan 29 14:13:28.683297 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1343)
Jan 29 14:13:28.813284 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 14:13:28.845488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:13:28.865369 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 14:13:28.883431 kernel: ACPI: button: Power Button [PWRF]
Jan 29 14:13:28.933266 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 14:13:28.941868 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 14:13:28.942136 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 14:13:28.952301 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 14:13:28.964927 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 14:13:28.966555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 14:13:28.968278 systemd[1]: Reloading finished in 554 ms.
Jan 29 14:13:28.998137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:13:29.005972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:13:29.064518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:29.073664 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 14:13:29.118788 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 14:13:29.121939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:13:29.134685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 14:13:29.142550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 14:13:29.152594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 14:13:29.153691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:13:29.155433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 14:13:29.160600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 14:13:29.163917 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 14:13:29.180770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 14:13:29.186569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 14:13:29.187478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:29.198042 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:29.198531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:13:29.198935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:13:29.199169 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:29.209609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 14:13:29.209866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 14:13:29.218122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 14:13:29.219369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 14:13:29.222439 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 14:13:29.222652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 14:13:29.232067 systemd[1]: Finished ensure-sysext.service.
Jan 29 14:13:29.240946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:29.242402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:13:29.251644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 14:13:29.252627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:13:29.252873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 14:13:29.253101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 14:13:29.261995 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 14:13:29.274914 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 14:13:29.286514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 14:13:29.287812 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:13:29.288756 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 14:13:29.289510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 14:13:29.308352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 14:13:29.312653 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 14:13:29.338476 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 14:13:29.347605 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 14:13:29.355337 augenrules[1450]: No rules
Jan 29 14:13:29.360640 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 14:13:29.361011 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 14:13:29.371880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 14:13:29.373628 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 14:13:29.400646 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 14:13:29.409381 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 14:13:29.492111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 14:13:29.503536 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 14:13:29.536891 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 14:13:29.575149 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 14:13:29.597115 systemd-networkd[1422]: lo: Link UP
Jan 29 14:13:29.597130 systemd-networkd[1422]: lo: Gained carrier
Jan 29 14:13:29.600227 systemd-networkd[1422]: Enumeration completed
Jan 29 14:13:29.601040 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:13:29.601157 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 14:13:29.603402 systemd-networkd[1422]: eth0: Link UP
Jan 29 14:13:29.603516 systemd-networkd[1422]: eth0: Gained carrier
Jan 29 14:13:29.603630 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:13:29.618417 systemd-resolved[1423]: Positive Trust Anchors:
Jan 29 14:13:29.618437 systemd-resolved[1423]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 14:13:29.618483 systemd-resolved[1423]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 14:13:29.632654 systemd-resolved[1423]: Using system hostname 'srv-wt390.gb1.brightbox.com'.
Jan 29 14:13:29.634396 systemd-networkd[1422]: eth0: DHCPv4 address 10.230.31.206/30, gateway 10.230.31.205 acquired from 10.230.31.205
Jan 29 14:13:29.635859 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection.
Jan 29 14:13:29.674561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 14:13:29.676113 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 14:13:29.677172 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 14:13:29.678512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:13:29.680457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:13:29.681308 systemd[1]: Reached target network.target - Network.
Jan 29 14:13:29.682035 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:13:29.683109 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 14:13:29.684019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 14:13:29.685013 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 14:13:29.685895 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 14:13:29.686778 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 14:13:29.686828 systemd[1]: Reached target paths.target - Path Units.
Jan 29 14:13:29.687506 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 14:13:29.688544 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 14:13:29.689520 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 14:13:29.690461 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 14:13:29.692546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 14:13:29.695314 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 14:13:29.701514 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 14:13:29.704326 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 14:13:29.708461 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 14:13:29.712133 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 14:13:29.713136 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 14:13:29.713853 systemd[1]: Reached target basic.target - Basic System.
Jan 29 14:13:29.716366 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 14:13:29.716419 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 14:13:29.720792 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 14:13:29.725426 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 14:13:29.731805 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 14:13:29.742306 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 14:13:29.752384 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 14:13:29.763502 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 14:13:29.764348 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 14:13:30.376957 systemd-resolved[1423]: Clock change detected. Flushing caches.
Jan 29 14:13:30.377338 systemd-timesyncd[1438]: Contacted time server 213.5.132.231:123 (0.flatcar.pool.ntp.org).
Jan 29 14:13:30.377611 systemd-timesyncd[1438]: Initial clock synchronization to Wed 2025-01-29 14:13:30.376874 UTC.
Jan 29 14:13:30.383499 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 14:13:30.390701 jq[1481]: false
Jan 29 14:13:30.394352 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 14:13:30.399367 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 14:13:30.407337 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 14:13:30.409022 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 14:13:30.411685 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 14:13:30.413359 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 14:13:30.418297 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 14:13:30.421661 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 14:13:30.431781 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 14:13:30.432035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found loop4
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found loop5
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found loop6
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found loop7
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda1
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda2
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda3
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found usr
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda4
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda6
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda7
Jan 29 14:13:30.440073 extend-filesystems[1482]: Found vda9
Jan 29 14:13:30.440073 extend-filesystems[1482]: Checking size of /dev/vda9
Jan 29 14:13:30.452558 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 14:13:30.455828 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 14:13:30.472396 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 14:13:30.474326 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 14:13:30.478007 jq[1491]: true
Jan 29 14:13:30.492373 dbus-daemon[1480]: [system] SELinux support is enabled
Jan 29 14:13:30.503901 dbus-daemon[1480]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1422 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 14:13:30.504018 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 14:13:30.509883 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 14:13:30.509996 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 14:13:30.511746 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 14:13:30.511793 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 14:13:30.518348 extend-filesystems[1482]: Resized partition /dev/vda9
Jan 29 14:13:30.520100 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 14:13:30.533139 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024)
Jan 29 14:13:30.524373 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 14:13:30.546896 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 29 14:13:30.547034 jq[1507]: true
Jan 29 14:13:30.540394 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 14:13:30.569500 update_engine[1490]: I20250129 14:13:30.568984  1490 main.cc:92] Flatcar Update Engine starting
Jan 29 14:13:30.589529 update_engine[1490]: I20250129 14:13:30.589423  1490 update_check_scheduler.cc:74] Next update check in 11m3s
Jan 29 14:13:30.591248 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 14:13:30.608723 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 14:13:30.624010 systemd-logind[1488]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 29 14:13:30.627706 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 14:13:30.628458 systemd-logind[1488]: New seat seat0.
Jan 29 14:13:30.647578 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 14:13:30.689197 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1331)
Jan 29 14:13:30.771232 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 29 14:13:30.771678 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 29 14:13:30.771918 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 29 14:13:30.783575 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 29 14:13:30.802792 polkitd[1535]: Started polkitd version 121
Jan 29 14:13:30.849202 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 14:13:30.851968 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 14:13:30.868967 polkitd[1535]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 14:13:30.870212 systemd[1]: Starting sshkeys.service...
Jan 29 14:13:30.869072 polkitd[1535]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 14:13:30.880153 polkitd[1535]: Finished loading, compiling and executing 2 rules Jan 29 14:13:30.884439 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 14:13:30.892650 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 14:13:30.885045 polkitd[1535]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 14:13:30.946885 systemd-hostnamed[1516]: Hostname set to (static) Jan 29 14:13:30.949068 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 14:13:30.961392 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 14:13:30.990303 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 14:13:31.002140 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 14:13:31.032320 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 14:13:31.032320 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 14:13:31.032320 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 14:13:31.038785 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Jan 29 14:13:31.035822 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 14:13:31.036667 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 14:13:31.080172 containerd[1509]: time="2025-01-29T14:13:31.078965637Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 14:13:31.110675 containerd[1509]: time="2025-01-29T14:13:31.110612084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 14:13:31.116700 containerd[1509]: time="2025-01-29T14:13:31.116658906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:13:31.116870 containerd[1509]: time="2025-01-29T14:13:31.116843079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 14:13:31.116993 containerd[1509]: time="2025-01-29T14:13:31.116967648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 14:13:31.117399 containerd[1509]: time="2025-01-29T14:13:31.117371899Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 14:13:31.117560 containerd[1509]: time="2025-01-29T14:13:31.117512154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 14:13:31.117748 containerd[1509]: time="2025-01-29T14:13:31.117718491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:13:31.117913 containerd[1509]: time="2025-01-29T14:13:31.117886647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:13:31.118320 containerd[1509]: time="2025-01-29T14:13:31.118283623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:13:31.118423 containerd[1509]: time="2025-01-29T14:13:31.118401091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 14:13:31.118548 containerd[1509]: time="2025-01-29T14:13:31.118501654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:13:31.118635 containerd[1509]: time="2025-01-29T14:13:31.118612887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 14:13:31.118904 containerd[1509]: time="2025-01-29T14:13:31.118877784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:13:31.119461 containerd[1509]: time="2025-01-29T14:13:31.119425744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:13:31.120046 containerd[1509]: time="2025-01-29T14:13:31.119747410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:13:31.120046 containerd[1509]: time="2025-01-29T14:13:31.119786274Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 14:13:31.120046 containerd[1509]: time="2025-01-29T14:13:31.119920621Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 14:13:31.120046 containerd[1509]: time="2025-01-29T14:13:31.120002951Z" level=info msg="metadata content store policy set" policy=shared Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.123863547Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.123951527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.123981229Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124004873Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124027844Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124250326Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124578526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124752920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124778676Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124800561Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124821692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124842375Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124861904Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125135 containerd[1509]: time="2025-01-29T14:13:31.124884330Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.124927835Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.124960022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.124982818Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.125001886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.125038687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.125062956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 29 14:13:31.125647 containerd[1509]: time="2025-01-29T14:13:31.125082552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.125995 containerd[1509]: time="2025-01-29T14:13:31.125103477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126143 containerd[1509]: time="2025-01-29T14:13:31.126097022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126262 containerd[1509]: time="2025-01-29T14:13:31.126239224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126396 containerd[1509]: time="2025-01-29T14:13:31.126371475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126575 containerd[1509]: time="2025-01-29T14:13:31.126540234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126691 containerd[1509]: time="2025-01-29T14:13:31.126667293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126820 containerd[1509]: time="2025-01-29T14:13:31.126786357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.126976 containerd[1509]: time="2025-01-29T14:13:31.126953725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.127101 containerd[1509]: time="2025-01-29T14:13:31.127066626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.127306 containerd[1509]: time="2025-01-29T14:13:31.127210106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 14:13:31.127306 containerd[1509]: time="2025-01-29T14:13:31.127240824Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 14:13:31.127571 containerd[1509]: time="2025-01-29T14:13:31.127444042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.127571 containerd[1509]: time="2025-01-29T14:13:31.127474320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.127571 containerd[1509]: time="2025-01-29T14:13:31.127493963Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 14:13:31.127812 containerd[1509]: time="2025-01-29T14:13:31.127760286Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.127796886Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.128027312Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.128050415Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.128065869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.128087343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.128127215Z" level=info msg="NRI interface is disabled by configuration." Jan 29 14:13:31.128375 containerd[1509]: time="2025-01-29T14:13:31.128148464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 14:13:31.129033 containerd[1509]: time="2025-01-29T14:13:31.128939176Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 14:13:31.130135 containerd[1509]: time="2025-01-29T14:13:31.129399633Z" level=info msg="Connect containerd service" Jan 29 14:13:31.130135 containerd[1509]: time="2025-01-29T14:13:31.129451053Z" level=info msg="using legacy CRI server" Jan 29 14:13:31.130135 containerd[1509]: time="2025-01-29T14:13:31.129465445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 14:13:31.130135 containerd[1509]: time="2025-01-29T14:13:31.129630054Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 14:13:31.130943 containerd[1509]: time="2025-01-29T14:13:31.130908030Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 14:13:31.131490 containerd[1509]: time="2025-01-29T14:13:31.131428508Z" level=info msg="Start subscribing containerd event" Jan 29 
14:13:31.131638 containerd[1509]: time="2025-01-29T14:13:31.131611965Z" level=info msg="Start recovering state" Jan 29 14:13:31.131893 containerd[1509]: time="2025-01-29T14:13:31.131819001Z" level=info msg="Start event monitor" Jan 29 14:13:31.132380 containerd[1509]: time="2025-01-29T14:13:31.132346001Z" level=info msg="Start snapshots syncer" Jan 29 14:13:31.132597 containerd[1509]: time="2025-01-29T14:13:31.132541983Z" level=info msg="Start cni network conf syncer for default" Jan 29 14:13:31.132813 containerd[1509]: time="2025-01-29T14:13:31.132754149Z" level=info msg="Start streaming server" Jan 29 14:13:31.133188 containerd[1509]: time="2025-01-29T14:13:31.132277373Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 14:13:31.133188 containerd[1509]: time="2025-01-29T14:13:31.132975508Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 14:13:31.133188 containerd[1509]: time="2025-01-29T14:13:31.133104850Z" level=info msg="containerd successfully booted in 0.058040s" Jan 29 14:13:31.133272 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 14:13:31.675459 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 14:13:31.703589 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 14:13:31.728760 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 14:13:31.737533 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 14:13:31.737852 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 14:13:31.745626 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 14:13:31.764872 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 14:13:31.771668 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 14:13:31.779827 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
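The containerd startup above includes a CNI error because `/etc/cni/net.d` is empty; on a node that will run pods, kubeadm or a CNI installer normally populates it. A hedged minimal bridge configuration of the kind containerd's CRI plugin accepts (file name, network name, and subnet are illustrative):

```json
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into `/etc/cni/net.d/10-containerd-net.conflist`, a file like this would be expected to clear the error on the next containerd restart; until then, the log's own advice applies: check CRI plugin status before setting up pod networking.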
Jan 29 14:13:31.781545 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 14:13:32.197697 systemd-networkd[1422]: eth0: Gained IPv6LL Jan 29 14:13:32.201744 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 14:13:32.204468 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 14:13:32.211527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:13:32.214598 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 14:13:32.249653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 14:13:32.759534 systemd-networkd[1422]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87f3:24:19ff:fee6:1fce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87f3:24:19ff:fee6:1fce/64 assigned by NDisc. Jan 29 14:13:32.759550 systemd-networkd[1422]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 29 14:13:33.257323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 14:13:33.264866 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 14:13:33.952481 kubelet[1599]: E0129 14:13:33.952331 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 14:13:33.955192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 14:13:33.955475 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 14:13:33.956825 systemd[1]: kubelet.service: Consumed 1.108s CPU time. 
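kubelet exits above because `/var/lib/kubelet/config.yaml` does not exist yet; that file is normally written by `kubeadm init`/`kubeadm join`, so the failure is expected on a node that has not joined a cluster. For orientation, a hedged minimal sketch of the kind of file kubeadm generates:

```yaml
# /var/lib/kubelet/config.yaml — normally generated by kubeadm; minimal hedged sketch.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches the SystemdCgroup:true runc option visible in the containerd
# CRI config dump earlier in this log.
cgroupDriver: systemd
```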
Jan 29 14:13:36.431561 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 14:13:36.450852 systemd[1]: Started sshd@0-10.230.31.206:22-147.75.109.163:39166.service - OpenSSH per-connection server daemon (147.75.109.163:39166). Jan 29 14:13:36.864078 login[1579]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jan 29 14:13:36.865623 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 14:13:36.885365 systemd-logind[1488]: New session 2 of user core. Jan 29 14:13:36.888618 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 14:13:36.902717 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 14:13:36.924495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 14:13:36.932661 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 14:13:36.945531 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 14:13:37.085641 systemd[1617]: Queued start job for default target default.target. Jan 29 14:13:37.097003 systemd[1617]: Created slice app.slice - User Application Slice. Jan 29 14:13:37.097799 systemd[1617]: Reached target paths.target - Paths. Jan 29 14:13:37.097830 systemd[1617]: Reached target timers.target - Timers. Jan 29 14:13:37.100222 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 14:13:37.116009 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 14:13:37.116238 systemd[1617]: Reached target sockets.target - Sockets. Jan 29 14:13:37.116265 systemd[1617]: Reached target basic.target - Basic System. Jan 29 14:13:37.116354 systemd[1617]: Reached target default.target - Main User Target. Jan 29 14:13:37.116420 systemd[1617]: Startup finished in 159ms. Jan 29 14:13:37.116656 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 29 14:13:37.122427 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 14:13:37.381437 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 39166 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0 Jan 29 14:13:37.384940 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:13:37.398281 systemd-logind[1488]: New session 3 of user core. Jan 29 14:13:37.411415 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 14:13:37.868501 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 14:13:37.878838 systemd-logind[1488]: New session 1 of user core. Jan 29 14:13:37.886319 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 14:13:37.989813 coreos-metadata[1479]: Jan 29 14:13:37.989 WARN failed to locate config-drive, using the metadata service API instead Jan 29 14:13:38.016735 coreos-metadata[1479]: Jan 29 14:13:38.016 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 14:13:38.023029 coreos-metadata[1479]: Jan 29 14:13:38.022 INFO Fetch failed with 404: resource not found Jan 29 14:13:38.023029 coreos-metadata[1479]: Jan 29 14:13:38.022 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 14:13:38.023677 coreos-metadata[1479]: Jan 29 14:13:38.023 INFO Fetch successful Jan 29 14:13:38.023828 coreos-metadata[1479]: Jan 29 14:13:38.023 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 14:13:38.036374 coreos-metadata[1479]: Jan 29 14:13:38.036 INFO Fetch successful Jan 29 14:13:38.036374 coreos-metadata[1479]: Jan 29 14:13:38.036 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 14:13:38.052475 coreos-metadata[1479]: Jan 29 14:13:38.052 INFO Fetch successful Jan 29 14:13:38.052475 coreos-metadata[1479]: Jan 29 14:13:38.052 INFO Fetching 
http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 14:13:38.067851 coreos-metadata[1479]: Jan 29 14:13:38.067 INFO Fetch successful Jan 29 14:13:38.067851 coreos-metadata[1479]: Jan 29 14:13:38.067 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 14:13:38.070170 coreos-metadata[1554]: Jan 29 14:13:38.069 WARN failed to locate config-drive, using the metadata service API instead Jan 29 14:13:38.085833 coreos-metadata[1479]: Jan 29 14:13:38.085 INFO Fetch successful Jan 29 14:13:38.093080 coreos-metadata[1554]: Jan 29 14:13:38.093 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 14:13:38.110523 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 14:13:38.111832 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 14:13:38.117891 coreos-metadata[1554]: Jan 29 14:13:38.117 INFO Fetch successful Jan 29 14:13:38.118035 coreos-metadata[1554]: Jan 29 14:13:38.117 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 14:13:38.147619 systemd[1]: Started sshd@1-10.230.31.206:22-147.75.109.163:43428.service - OpenSSH per-connection server daemon (147.75.109.163:43428). Jan 29 14:13:38.151225 coreos-metadata[1554]: Jan 29 14:13:38.151 INFO Fetch successful Jan 29 14:13:38.154308 unknown[1554]: wrote ssh authorized keys file for user: core Jan 29 14:13:38.176382 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Jan 29 14:13:38.178096 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 14:13:38.181291 systemd[1]: Finished sshkeys.service. Jan 29 14:13:38.185506 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 14:13:38.185945 systemd[1]: Startup finished in 1.455s (kernel) + 14.163s (initrd) + 11.704s (userspace) = 27.323s. 
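The startup summary above reports 1.455 s (kernel) + 14.163 s (initrd) + 11.704 s (userspace) = 27.323 s; the printed terms actually sum to 27.322 s, presumably because systemd rounds each phase independently of the total. A quick check:

```shell
# Sum the boot phases exactly as printed in the log (seconds).
awk 'BEGIN { printf "%.3f\n", 1.455 + 14.163 + 11.704 }'   # prints 27.322
```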
Jan 29 14:13:39.047912 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 43428 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0 Jan 29 14:13:39.050578 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:13:39.057728 systemd-logind[1488]: New session 4 of user core. Jan 29 14:13:39.065355 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 14:13:39.664445 sshd[1663]: Connection closed by 147.75.109.163 port 43428 Jan 29 14:13:39.667490 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Jan 29 14:13:39.671891 systemd[1]: sshd@1-10.230.31.206:22-147.75.109.163:43428.service: Deactivated successfully. Jan 29 14:13:39.674617 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 14:13:39.676685 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit. Jan 29 14:13:39.678142 systemd-logind[1488]: Removed session 4. Jan 29 14:13:39.826457 systemd[1]: Started sshd@2-10.230.31.206:22-147.75.109.163:43430.service - OpenSSH per-connection server daemon (147.75.109.163:43430). Jan 29 14:13:40.724160 sshd[1668]: Accepted publickey for core from 147.75.109.163 port 43430 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0 Jan 29 14:13:40.726254 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:13:40.733130 systemd-logind[1488]: New session 5 of user core. Jan 29 14:13:40.744174 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 14:13:41.342281 sshd[1670]: Connection closed by 147.75.109.163 port 43430 Jan 29 14:13:41.343365 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Jan 29 14:13:41.348340 systemd[1]: sshd@2-10.230.31.206:22-147.75.109.163:43430.service: Deactivated successfully. Jan 29 14:13:41.350499 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 14:13:41.351536 systemd-logind[1488]: Session 5 logged out. 
Waiting for processes to exit. Jan 29 14:13:41.353224 systemd-logind[1488]: Removed session 5. Jan 29 14:13:41.503576 systemd[1]: Started sshd@3-10.230.31.206:22-147.75.109.163:43444.service - OpenSSH per-connection server daemon (147.75.109.163:43444). Jan 29 14:13:42.392937 sshd[1675]: Accepted publickey for core from 147.75.109.163 port 43444 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0 Jan 29 14:13:42.395000 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:13:42.402764 systemd-logind[1488]: New session 6 of user core. Jan 29 14:13:42.409340 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 14:13:43.012652 sshd[1677]: Connection closed by 147.75.109.163 port 43444 Jan 29 14:13:43.013880 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jan 29 14:13:43.019544 systemd[1]: sshd@3-10.230.31.206:22-147.75.109.163:43444.service: Deactivated successfully. Jan 29 14:13:43.021965 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 14:13:43.023276 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit. Jan 29 14:13:43.025051 systemd-logind[1488]: Removed session 6. Jan 29 14:13:43.185552 systemd[1]: Started sshd@4-10.230.31.206:22-147.75.109.163:43452.service - OpenSSH per-connection server daemon (147.75.109.163:43452). Jan 29 14:13:44.000370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 14:13:44.012431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:13:44.075713 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 43452 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0 Jan 29 14:13:44.078778 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:13:44.088463 systemd-logind[1488]: New session 7 of user core. 
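The "Scheduled restart job, restart counter is at 1" entry above shows systemd re-launching kubelet roughly ten seconds after its config-file failure; a hedged fragment of the kind of drop-in that produces this behavior (kubeadm ships a similar one as `10-kubeadm.conf`):

```ini
# kubelet.service drop-in (hedged sketch): keep retrying until kubeadm
# writes /var/lib/kubelet/config.yaml.
[Service]
Restart=always
RestartSec=10
```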
Jan 29 14:13:44.094425 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 14:13:44.260317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 14:13:44.272690 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 14:13:44.358831 kubelet[1693]: E0129 14:13:44.358713 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 14:13:44.362683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 14:13:44.362946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 14:13:44.567996 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 14:13:44.568541 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 14:13:44.583952 sudo[1700]: pam_unix(sudo:session): session closed for user root Jan 29 14:13:44.727231 sshd[1687]: Connection closed by 147.75.109.163 port 43452 Jan 29 14:13:44.728881 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jan 29 14:13:44.734468 systemd[1]: sshd@4-10.230.31.206:22-147.75.109.163:43452.service: Deactivated successfully. Jan 29 14:13:44.737374 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 14:13:44.738752 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit. Jan 29 14:13:44.740490 systemd-logind[1488]: Removed session 7. Jan 29 14:13:44.893157 systemd[1]: Started sshd@5-10.230.31.206:22-147.75.109.163:43466.service - OpenSSH per-connection server daemon (147.75.109.163:43466). 
Jan 29 14:13:45.783570 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 43466 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0
Jan 29 14:13:45.785651 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:13:45.792163 systemd-logind[1488]: New session 8 of user core.
Jan 29 14:13:45.800332 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 14:13:46.261717 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 14:13:46.262678 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:13:46.268981 sudo[1709]: pam_unix(sudo:session): session closed for user root
Jan 29 14:13:46.278283 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 14:13:46.278762 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:13:46.301647 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 14:13:46.348626 augenrules[1731]: No rules
Jan 29 14:13:46.349813 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 14:13:46.350252 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 14:13:46.352486 sudo[1708]: pam_unix(sudo:session): session closed for user root
Jan 29 14:13:46.496143 sshd[1707]: Connection closed by 147.75.109.163 port 43466
Jan 29 14:13:46.497158 sshd-session[1705]: pam_unix(sshd:session): session closed for user core
Jan 29 14:13:46.502396 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit.
Jan 29 14:13:46.503649 systemd[1]: sshd@5-10.230.31.206:22-147.75.109.163:43466.service: Deactivated successfully.
Jan 29 14:13:46.506573 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 14:13:46.508447 systemd-logind[1488]: Removed session 8.
Jan 29 14:13:46.664650 systemd[1]: Started sshd@6-10.230.31.206:22-147.75.109.163:43478.service - OpenSSH per-connection server daemon (147.75.109.163:43478).
Jan 29 14:13:47.552565 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 43478 ssh2: RSA SHA256:qjXGBTkJVpYqh3NiDOavMP+6N/OEcEvnZQsnPap0wc0
Jan 29 14:13:47.554670 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:13:47.563318 systemd-logind[1488]: New session 9 of user core.
Jan 29 14:13:47.574335 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 14:13:48.029541 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 14:13:48.030039 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:13:48.786450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:13:48.798453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:13:48.842634 systemd[1]: Reloading requested from client PID 1775 ('systemctl') (unit session-9.scope)...
Jan 29 14:13:48.842677 systemd[1]: Reloading...
Jan 29 14:13:48.973197 zram_generator::config[1810]: No configuration found.
Jan 29 14:13:49.173663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:13:49.283869 systemd[1]: Reloading finished in 440 ms.
Jan 29 14:13:49.364506 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:13:49.369845 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 14:13:49.370200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:13:49.376505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:13:49.543675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:13:49.556630 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 14:13:49.632136 kubelet[1884]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 14:13:49.632136 kubelet[1884]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 14:13:49.632136 kubelet[1884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 14:13:49.632820 kubelet[1884]: I0129 14:13:49.632271 1884 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 14:13:50.157778 kubelet[1884]: I0129 14:13:50.157719 1884 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 14:13:50.158042 kubelet[1884]: I0129 14:13:50.158022 1884 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 14:13:50.158667 kubelet[1884]: I0129 14:13:50.158642 1884 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 14:13:50.184542 kubelet[1884]: I0129 14:13:50.184500 1884 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 14:13:50.196018 kubelet[1884]: E0129 14:13:50.195949 1884 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 14:13:50.196018 kubelet[1884]: I0129 14:13:50.196022 1884 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 14:13:50.202998 kubelet[1884]: I0129 14:13:50.202961 1884 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 14:13:50.203627 kubelet[1884]: I0129 14:13:50.203557 1884 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 14:13:50.203868 kubelet[1884]: I0129 14:13:50.203623 1884 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.31.206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 14:13:50.204214 kubelet[1884]: I0129 14:13:50.203897 1884 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 14:13:50.204214 kubelet[1884]: I0129 14:13:50.203926 1884 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 14:13:50.204214 kubelet[1884]: I0129 14:13:50.204177 1884 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 14:13:50.208714 kubelet[1884]: I0129 14:13:50.208672 1884 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 14:13:50.208969 kubelet[1884]: I0129 14:13:50.208724 1884 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 14:13:50.208969 kubelet[1884]: I0129 14:13:50.208775 1884 kubelet.go:352] "Adding apiserver pod source"
Jan 29 14:13:50.208969 kubelet[1884]: I0129 14:13:50.208814 1884 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 14:13:50.211894 kubelet[1884]: E0129 14:13:50.211723 1884 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:13:50.211894 kubelet[1884]: E0129 14:13:50.211836 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:13:50.217155 kubelet[1884]: I0129 14:13:50.217101 1884 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 14:13:50.218158 kubelet[1884]: I0129 14:13:50.217968 1884 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 14:13:50.219152 kubelet[1884]: W0129 14:13:50.218789 1884 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 14:13:50.221463 kubelet[1884]: I0129 14:13:50.221431 1884 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 14:13:50.221549 kubelet[1884]: I0129 14:13:50.221500 1884 server.go:1287] "Started kubelet"
Jan 29 14:13:50.222810 kubelet[1884]: I0129 14:13:50.221693 1884 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 14:13:50.224067 kubelet[1884]: I0129 14:13:50.223386 1884 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 14:13:50.225621 kubelet[1884]: I0129 14:13:50.225314 1884 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 14:13:50.228133 kubelet[1884]: I0129 14:13:50.226588 1884 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 14:13:50.233520 kubelet[1884]: I0129 14:13:50.233335 1884 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 14:13:50.233707 kubelet[1884]: E0129 14:13:50.231984 1884 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.31.206.181f2f569c21e221 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.31.206,UID:10.230.31.206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.31.206,},FirstTimestamp:2025-01-29 14:13:50.221460001 +0000 UTC m=+0.656171774,LastTimestamp:2025-01-29 14:13:50.221460001 +0000 UTC m=+0.656171774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.31.206,}"
Jan 29 14:13:50.237876 kubelet[1884]: I0129 14:13:50.237838 1884 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 14:13:50.241033 kubelet[1884]: I0129 14:13:50.241003 1884 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 14:13:50.241404 kubelet[1884]: E0129 14:13:50.241373 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.250993 kubelet[1884]: E0129 14:13:50.250955 1884 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.31.206\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 29 14:13:50.251238 kubelet[1884]: W0129 14:13:50.251210 1884 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 14:13:50.251420 kubelet[1884]: E0129 14:13:50.251380 1884 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 14:13:50.251652 kubelet[1884]: W0129 14:13:50.251628 1884 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.31.206" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 14:13:50.251962 kubelet[1884]: E0129 14:13:50.251936 1884 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.31.206\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 14:13:50.252482 kubelet[1884]: E0129 14:13:50.252455 1884 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 14:13:50.253053 kubelet[1884]: I0129 14:13:50.253031 1884 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 14:13:50.253330 kubelet[1884]: I0129 14:13:50.253307 1884 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 14:13:50.255147 kubelet[1884]: I0129 14:13:50.254074 1884 factory.go:221] Registration of the containerd container factory successfully
Jan 29 14:13:50.255287 kubelet[1884]: I0129 14:13:50.255267 1884 factory.go:221] Registration of the systemd container factory successfully
Jan 29 14:13:50.255445 kubelet[1884]: E0129 14:13:50.254479 1884 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.31.206.181f2f569dfaa131 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.31.206,UID:10.230.31.206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.230.31.206,},FirstTimestamp:2025-01-29 14:13:50.252441905 +0000 UTC m=+0.687153668,LastTimestamp:2025-01-29 14:13:50.252441905 +0000 UTC m=+0.687153668,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.31.206,}"
Jan 29 14:13:50.256231 kubelet[1884]: I0129 14:13:50.256201 1884 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 14:13:50.297194 kubelet[1884]: I0129 14:13:50.297100 1884 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 14:13:50.297194 kubelet[1884]: I0129 14:13:50.297185 1884 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 14:13:50.297447 kubelet[1884]: I0129 14:13:50.297227 1884 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 14:13:50.299584 kubelet[1884]: I0129 14:13:50.299376 1884 policy_none.go:49] "None policy: Start"
Jan 29 14:13:50.299584 kubelet[1884]: I0129 14:13:50.299419 1884 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 14:13:50.299584 kubelet[1884]: I0129 14:13:50.299449 1884 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 14:13:50.315049 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 14:13:50.337524 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 14:13:50.344154 kubelet[1884]: E0129 14:13:50.341768 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.354148 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 14:13:50.357236 kubelet[1884]: I0129 14:13:50.357189 1884 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 14:13:50.357669 kubelet[1884]: I0129 14:13:50.357644 1884 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 14:13:50.361045 kubelet[1884]: I0129 14:13:50.361018 1884 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 14:13:50.361152 kubelet[1884]: I0129 14:13:50.361057 1884 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 14:13:50.362162 kubelet[1884]: I0129 14:13:50.361344 1884 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 14:13:50.362722 kubelet[1884]: I0129 14:13:50.362694 1884 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 14:13:50.362840 kubelet[1884]: I0129 14:13:50.362817 1884 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 14:13:50.362888 kubelet[1884]: I0129 14:13:50.362876 1884 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 14:13:50.364252 kubelet[1884]: E0129 14:13:50.363103 1884 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 14:13:50.365847 kubelet[1884]: I0129 14:13:50.365822 1884 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 14:13:50.368409 kubelet[1884]: E0129 14:13:50.368383 1884 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 14:13:50.368640 kubelet[1884]: E0129 14:13:50.368614 1884 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.31.206\" not found"
Jan 29 14:13:50.460246 kubelet[1884]: E0129 14:13:50.458339 1884 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.31.206\" not found" node="10.230.31.206"
Jan 29 14:13:50.463138 kubelet[1884]: I0129 14:13:50.462983 1884 kubelet_node_status.go:76] "Attempting to register node" node="10.230.31.206"
Jan 29 14:13:50.473086 kubelet[1884]: I0129 14:13:50.473015 1884 kubelet_node_status.go:79] "Successfully registered node" node="10.230.31.206"
Jan 29 14:13:50.473086 kubelet[1884]: E0129 14:13:50.473084 1884 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.230.31.206\": node \"10.230.31.206\" not found"
Jan 29 14:13:50.480410 kubelet[1884]: E0129 14:13:50.480340 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.581523 kubelet[1884]: E0129 14:13:50.581451 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.594253 sudo[1742]: pam_unix(sudo:session): session closed for user root
Jan 29 14:13:50.682368 kubelet[1884]: E0129 14:13:50.682272 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.739366 sshd[1741]: Connection closed by 147.75.109.163 port 43478
Jan 29 14:13:50.740871 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jan 29 14:13:50.747005 systemd[1]: sshd@6-10.230.31.206:22-147.75.109.163:43478.service: Deactivated successfully.
Jan 29 14:13:50.750771 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 14:13:50.753873 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit.
Jan 29 14:13:50.756371 systemd-logind[1488]: Removed session 9.
Jan 29 14:13:50.782682 kubelet[1884]: E0129 14:13:50.782537 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.883797 kubelet[1884]: E0129 14:13:50.883688 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:50.984819 kubelet[1884]: E0129 14:13:50.984619 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:51.085931 kubelet[1884]: E0129 14:13:51.085803 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:51.161811 kubelet[1884]: I0129 14:13:51.161706 1884 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 14:13:51.162203 kubelet[1884]: W0129 14:13:51.162155 1884 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 14:13:51.162283 kubelet[1884]: W0129 14:13:51.162238 1884 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 14:13:51.186044 kubelet[1884]: E0129 14:13:51.185959 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:51.212243 kubelet[1884]: E0129 14:13:51.212193 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:13:51.286740 kubelet[1884]: E0129 14:13:51.286694 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:51.387567 kubelet[1884]: E0129 14:13:51.387315 1884 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.31.206\" not found"
Jan 29 14:13:51.489277 kubelet[1884]: I0129 14:13:51.488759 1884 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 14:13:51.490124 containerd[1509]: time="2025-01-29T14:13:51.489791661Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 14:13:51.491425 kubelet[1884]: I0129 14:13:51.490262 1884 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 14:13:52.213461 kubelet[1884]: E0129 14:13:52.212950 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:13:52.213461 kubelet[1884]: I0129 14:13:52.213026 1884 apiserver.go:52] "Watching apiserver"
Jan 29 14:13:52.221612 kubelet[1884]: E0129 14:13:52.221518 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5"
Jan 29 14:13:52.231398 systemd[1]: Created slice kubepods-besteffort-pod2067ba57_7f06_46a1_81e3_b6ac1c2bd8ea.slice - libcontainer container kubepods-besteffort-pod2067ba57_7f06_46a1_81e3_b6ac1c2bd8ea.slice.
Jan 29 14:13:52.254056 kubelet[1884]: I0129 14:13:52.253978 1884 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 14:13:52.257264 systemd[1]: Created slice kubepods-besteffort-pod333c60dc_4d8b_4109_8091_e96919908a21.slice - libcontainer container kubepods-besteffort-pod333c60dc_4d8b_4109_8091_e96919908a21.slice.
Jan 29 14:13:52.267692 kubelet[1884]: I0129 14:13:52.267633 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea-xtables-lock\") pod \"kube-proxy-vn9db\" (UID: \"2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea\") " pod="kube-system/kube-proxy-vn9db"
Jan 29 14:13:52.267692 kubelet[1884]: I0129 14:13:52.267694 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea-lib-modules\") pod \"kube-proxy-vn9db\" (UID: \"2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea\") " pod="kube-system/kube-proxy-vn9db"
Jan 29 14:13:52.267920 kubelet[1884]: I0129 14:13:52.267728 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-xtables-lock\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.267920 kubelet[1884]: I0129 14:13:52.267757 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-var-lib-calico\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.267920 kubelet[1884]: I0129 14:13:52.267800 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-cni-net-dir\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.267920 kubelet[1884]: I0129 14:13:52.267867 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5-varrun\") pod \"csi-node-driver-bkqlz\" (UID: \"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5\") " pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:13:52.267920 kubelet[1884]: I0129 14:13:52.267900 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5-socket-dir\") pod \"csi-node-driver-bkqlz\" (UID: \"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5\") " pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:13:52.268230 kubelet[1884]: I0129 14:13:52.267930 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6cw\" (UniqueName: \"kubernetes.io/projected/d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5-kube-api-access-tr6cw\") pod \"csi-node-driver-bkqlz\" (UID: \"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5\") " pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:13:52.268230 kubelet[1884]: I0129 14:13:52.267964 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-flexvol-driver-host\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268230 kubelet[1884]: I0129 14:13:52.267991 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rls\" (UniqueName: \"kubernetes.io/projected/2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea-kube-api-access-b8rls\") pod \"kube-proxy-vn9db\" (UID: \"2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea\") " pod="kube-system/kube-proxy-vn9db"
Jan 29 14:13:52.268230 kubelet[1884]: I0129 14:13:52.268019 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-policysync\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268230 kubelet[1884]: I0129 14:13:52.268049 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333c60dc-4d8b-4109-8091-e96919908a21-tigera-ca-bundle\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268453 kubelet[1884]: I0129 14:13:52.268075 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/333c60dc-4d8b-4109-8091-e96919908a21-node-certs\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268453 kubelet[1884]: I0129 14:13:52.268100 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-var-run-calico\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268453 kubelet[1884]: I0129 14:13:52.268167 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-cni-bin-dir\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268453 kubelet[1884]: I0129 14:13:52.268204 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-cni-log-dir\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268453 kubelet[1884]: I0129 14:13:52.268231 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jxf\" (UniqueName: \"kubernetes.io/projected/333c60dc-4d8b-4109-8091-e96919908a21-kube-api-access-h4jxf\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268721 kubelet[1884]: I0129 14:13:52.268262 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5-registration-dir\") pod \"csi-node-driver-bkqlz\" (UID: \"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5\") " pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:13:52.268721 kubelet[1884]: I0129 14:13:52.268288 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea-kube-proxy\") pod \"kube-proxy-vn9db\" (UID: \"2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea\") " pod="kube-system/kube-proxy-vn9db"
Jan 29 14:13:52.268721 kubelet[1884]: I0129 14:13:52.268313 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/333c60dc-4d8b-4109-8091-e96919908a21-lib-modules\") pod \"calico-node-xmthx\" (UID: \"333c60dc-4d8b-4109-8091-e96919908a21\") " pod="calico-system/calico-node-xmthx"
Jan 29 14:13:52.268721 kubelet[1884]: I0129 14:13:52.268337 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5-kubelet-dir\") pod \"csi-node-driver-bkqlz\" (UID: \"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5\") " pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:13:52.371294 kubelet[1884]: E0129 14:13:52.371245 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:13:52.371294 kubelet[1884]: W0129 14:13:52.371279 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:13:52.371859 kubelet[1884]: E0129 14:13:52.371348 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:13:52.371859 kubelet[1884]: E0129 14:13:52.371716 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:13:52.371859 kubelet[1884]: W0129 14:13:52.371732 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:13:52.371859 kubelet[1884]: E0129 14:13:52.371748 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:13:52.372551 kubelet[1884]: E0129 14:13:52.372324 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:13:52.372551 kubelet[1884]: W0129 14:13:52.372347 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:13:52.372551 kubelet[1884]: E0129 14:13:52.372462 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:13:52.373089 kubelet[1884]: E0129 14:13:52.372661 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:13:52.373089 kubelet[1884]: W0129 14:13:52.372676 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:13:52.373089 kubelet[1884]: E0129 14:13:52.372724 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 14:13:52.373089 kubelet[1884]: E0129 14:13:52.372959 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.373089 kubelet[1884]: W0129 14:13:52.372974 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.373089 kubelet[1884]: E0129 14:13:52.373017 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.373737 kubelet[1884]: E0129 14:13:52.373258 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.373737 kubelet[1884]: W0129 14:13:52.373273 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.373737 kubelet[1884]: E0129 14:13:52.373330 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.373737 kubelet[1884]: E0129 14:13:52.373539 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.373737 kubelet[1884]: W0129 14:13:52.373553 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.374297 kubelet[1884]: E0129 14:13:52.373799 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.374297 kubelet[1884]: W0129 14:13:52.373844 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.374297 kubelet[1884]: E0129 14:13:52.373611 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.374297 kubelet[1884]: E0129 14:13:52.373901 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.374297 kubelet[1884]: E0129 14:13:52.374086 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.374297 kubelet[1884]: W0129 14:13:52.374100 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.374297 kubelet[1884]: E0129 14:13:52.374207 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.374382 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.375338 kubelet[1884]: W0129 14:13:52.374397 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.374442 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.374642 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.375338 kubelet[1884]: W0129 14:13:52.374656 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.374717 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.374938 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.375338 kubelet[1884]: W0129 14:13:52.374953 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.375026 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.375338 kubelet[1884]: E0129 14:13:52.375216 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.376371 kubelet[1884]: W0129 14:13:52.375230 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.376371 kubelet[1884]: E0129 14:13:52.375272 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.376371 kubelet[1884]: E0129 14:13:52.375493 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.376371 kubelet[1884]: W0129 14:13:52.375508 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.376371 kubelet[1884]: E0129 14:13:52.375557 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.376371 kubelet[1884]: E0129 14:13:52.375764 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.376371 kubelet[1884]: W0129 14:13:52.375778 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.376371 kubelet[1884]: E0129 14:13:52.375823 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.376371 kubelet[1884]: E0129 14:13:52.376054 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.376371 kubelet[1884]: W0129 14:13:52.376069 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376150 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376353 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.377310 kubelet[1884]: W0129 14:13:52.376367 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376414 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376600 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.377310 kubelet[1884]: W0129 14:13:52.376625 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376678 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376898 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.377310 kubelet[1884]: W0129 14:13:52.376915 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.377310 kubelet[1884]: E0129 14:13:52.376960 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377171 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.378092 kubelet[1884]: W0129 14:13:52.377185 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377232 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377439 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.378092 kubelet[1884]: W0129 14:13:52.377453 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377505 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377702 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.378092 kubelet[1884]: W0129 14:13:52.377716 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377757 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.378092 kubelet[1884]: E0129 14:13:52.377975 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379064 kubelet[1884]: W0129 14:13:52.377988 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379064 kubelet[1884]: E0129 14:13:52.378027 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.379064 kubelet[1884]: E0129 14:13:52.378296 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379064 kubelet[1884]: W0129 14:13:52.378310 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379064 kubelet[1884]: E0129 14:13:52.378343 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.379064 kubelet[1884]: E0129 14:13:52.378593 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379064 kubelet[1884]: W0129 14:13:52.378607 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379064 kubelet[1884]: E0129 14:13:52.378653 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.379064 kubelet[1884]: E0129 14:13:52.378893 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379064 kubelet[1884]: W0129 14:13:52.378907 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.378943 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.379204 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379890 kubelet[1884]: W0129 14:13:52.379218 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.379265 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.379481 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379890 kubelet[1884]: W0129 14:13:52.379496 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.379534 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.379730 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.379890 kubelet[1884]: W0129 14:13:52.379743 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.379890 kubelet[1884]: E0129 14:13:52.379789 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.381503 kubelet[1884]: E0129 14:13:52.380027 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381503 kubelet[1884]: W0129 14:13:52.380042 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381503 kubelet[1884]: E0129 14:13:52.380298 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381503 kubelet[1884]: W0129 14:13:52.380318 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381503 kubelet[1884]: E0129 14:13:52.380550 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381503 kubelet[1884]: W0129 14:13:52.380565 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381503 kubelet[1884]: E0129 14:13:52.380799 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381503 kubelet[1884]: W0129 14:13:52.380812 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381503 kubelet[1884]: E0129 14:13:52.381076 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381503 kubelet[1884]: W0129 14:13:52.381095 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381503 kubelet[1884]: E0129 14:13:52.381466 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381971 kubelet[1884]: W0129 14:13:52.381479 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381971 kubelet[1884]: E0129 14:13:52.381712 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.381971 kubelet[1884]: W0129 14:13:52.381725 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.381971 kubelet[1884]: E0129 14:13:52.381952 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jan 29 14:13:52.381971 kubelet[1884]: W0129 14:13:52.381966 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.382277 kubelet[1884]: E0129 14:13:52.382252 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.382421 kubelet[1884]: E0129 14:13:52.382395 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.382563 kubelet[1884]: E0129 14:13:52.382529 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.382656 kubelet[1884]: E0129 14:13:52.382632 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.382756 kubelet[1884]: E0129 14:13:52.382735 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.383104 kubelet[1884]: E0129 14:13:52.382861 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.383104 kubelet[1884]: E0129 14:13:52.382884 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.383104 kubelet[1884]: E0129 14:13:52.382899 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.383104 kubelet[1884]: E0129 14:13:52.382949 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.383104 kubelet[1884]: W0129 14:13:52.382966 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.383818 kubelet[1884]: E0129 14:13:52.383672 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.383818 kubelet[1884]: W0129 14:13:52.383691 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.384131 kubelet[1884]: E0129 14:13:52.384096 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.384369 kubelet[1884]: W0129 14:13:52.384236 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.384652 kubelet[1884]: E0129 14:13:52.384632 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.384787 kubelet[1884]: W0129 14:13:52.384766 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.385389 kubelet[1884]: E0129 14:13:52.385318 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.385389 kubelet[1884]: W0129 14:13:52.385335 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.385973 kubelet[1884]: E0129 14:13:52.385866 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.385973 kubelet[1884]: W0129 14:13:52.385904 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.389356 kubelet[1884]: E0129 14:13:52.389325 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.389554 kubelet[1884]: W0129 14:13:52.389398 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.390042 kubelet[1884]: E0129 14:13:52.389918 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.390042 kubelet[1884]: W0129 14:13:52.389938 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.390624 kubelet[1884]: E0129 14:13:52.390483 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jan 29 14:13:52.390624 kubelet[1884]: W0129 14:13:52.390502 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.390624 kubelet[1884]: E0129 14:13:52.390615 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390797 kubelet[1884]: E0129 14:13:52.390668 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390797 kubelet[1884]: E0129 14:13:52.390693 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390797 kubelet[1884]: E0129 14:13:52.390731 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390797 kubelet[1884]: E0129 14:13:52.390759 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390987 kubelet[1884]: E0129 14:13:52.390845 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390987 kubelet[1884]: E0129 14:13:52.390871 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.390987 kubelet[1884]: E0129 14:13:52.390903 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.390987 kubelet[1884]: E0129 14:13:52.390940 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.395535 kubelet[1884]: E0129 14:13:52.395502 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.397369 kubelet[1884]: W0129 14:13:52.397136 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.397524 kubelet[1884]: E0129 14:13:52.397497 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.398619 kubelet[1884]: E0129 14:13:52.398598 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.398902 kubelet[1884]: W0129 14:13:52.398732 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.399050 kubelet[1884]: E0129 14:13:52.399027 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.399699 kubelet[1884]: E0129 14:13:52.399679 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.400357 kubelet[1884]: W0129 14:13:52.400171 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.400843 kubelet[1884]: E0129 14:13:52.400788 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.401758 kubelet[1884]: E0129 14:13:52.401467 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.401758 kubelet[1884]: W0129 14:13:52.401486 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.401758 kubelet[1884]: E0129 14:13:52.401759 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.402234 kubelet[1884]: E0129 14:13:52.402164 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.402234 kubelet[1884]: W0129 14:13:52.402183 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.402330 kubelet[1884]: E0129 14:13:52.402262 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.402598 kubelet[1884]: E0129 14:13:52.402574 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.402598 kubelet[1884]: W0129 14:13:52.402596 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.402769 kubelet[1884]: E0129 14:13:52.402743 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.403238 kubelet[1884]: E0129 14:13:52.403213 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.403238 kubelet[1884]: W0129 14:13:52.403236 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.403385 kubelet[1884]: E0129 14:13:52.403304 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.403690 kubelet[1884]: E0129 14:13:52.403666 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.403690 kubelet[1884]: W0129 14:13:52.403688 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.403809 kubelet[1884]: E0129 14:13:52.403792 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.404243 kubelet[1884]: E0129 14:13:52.404217 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.404243 kubelet[1884]: W0129 14:13:52.404241 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.404467 kubelet[1884]: E0129 14:13:52.404421 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.404682 kubelet[1884]: E0129 14:13:52.404566 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.404682 kubelet[1884]: W0129 14:13:52.404597 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.404682 kubelet[1884]: E0129 14:13:52.404644 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.404956 kubelet[1884]: E0129 14:13:52.404933 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.404956 kubelet[1884]: W0129 14:13:52.404955 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.405190 kubelet[1884]: E0129 14:13:52.405143 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.405516 kubelet[1884]: E0129 14:13:52.405483 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.405516 kubelet[1884]: W0129 14:13:52.405503 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.405723 kubelet[1884]: E0129 14:13:52.405660 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.405901 kubelet[1884]: E0129 14:13:52.405878 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.405901 kubelet[1884]: W0129 14:13:52.405900 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.407648 kubelet[1884]: E0129 14:13:52.406011 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.407648 kubelet[1884]: E0129 14:13:52.406214 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.407648 kubelet[1884]: W0129 14:13:52.406229 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.407648 kubelet[1884]: E0129 14:13:52.406488 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.407648 kubelet[1884]: W0129 14:13:52.406503 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.407648 kubelet[1884]: E0129 14:13:52.406814 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.407648 kubelet[1884]: W0129 14:13:52.406829 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.407648 kubelet[1884]: E0129 14:13:52.407103 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.407648 kubelet[1884]: W0129 14:13:52.407150 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.407648 kubelet[1884]: E0129 14:13:52.407259 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.408071 kubelet[1884]: E0129 14:13:52.407290 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.408071 kubelet[1884]: E0129 14:13:52.407404 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.408071 kubelet[1884]: W0129 14:13:52.407418 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.408071 kubelet[1884]: E0129 14:13:52.407678 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.408071 kubelet[1884]: W0129 14:13:52.407694 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.408323 kubelet[1884]: E0129 14:13:52.408193 1884 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Jan 29 14:13:52.408323 kubelet[1884]: W0129 14:13:52.408209 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.408600 kubelet[1884]: E0129 14:13:52.408517 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.411325 kubelet[1884]: E0129 14:13:52.411271 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.411325 kubelet[1884]: E0129 14:13:52.411323 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.411478 kubelet[1884]: E0129 14:13:52.411348 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.412206 kubelet[1884]: E0129 14:13:52.412166 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.412206 kubelet[1884]: E0129 14:13:52.408497 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.412290 kubelet[1884]: W0129 14:13:52.412218 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.412613 kubelet[1884]: E0129 14:13:52.412578 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.412613 kubelet[1884]: W0129 14:13:52.412601 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.415146 kubelet[1884]: E0129 14:13:52.414244 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.415146 kubelet[1884]: W0129 14:13:52.414268 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.415522 kubelet[1884]: E0129 14:13:52.415477 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.415522 kubelet[1884]: W0129 14:13:52.415518 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.419278 kubelet[1884]: E0129 14:13:52.416692 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.419278 kubelet[1884]: E0129 14:13:52.416738 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.419278 kubelet[1884]: E0129 14:13:52.416764 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.419278 kubelet[1884]: E0129 14:13:52.416785 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.419278 kubelet[1884]: E0129 14:13:52.418724 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.419278 kubelet[1884]: W0129 14:13:52.418739 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.419278 kubelet[1884]: E0129 14:13:52.419272 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.419665 kubelet[1884]: W0129 14:13:52.419287 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.420153 kubelet[1884]: E0129 14:13:52.419748 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.420153 kubelet[1884]: E0129 14:13:52.419790 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.420153 kubelet[1884]: E0129 14:13:52.419937 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.420153 kubelet[1884]: W0129 14:13:52.419953 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.421179 kubelet[1884]: E0129 14:13:52.420871 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.421179 kubelet[1884]: W0129 14:13:52.420894 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.422132 kubelet[1884]: E0129 14:13:52.421378 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.422132 kubelet[1884]: W0129 14:13:52.421457 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.422132 kubelet[1884]: E0129 14:13:52.421782 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.422132 kubelet[1884]: W0129 14:13:52.421809 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.422132 kubelet[1884]: E0129 14:13:52.421823 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.422132 kubelet[1884]: E0129 14:13:52.421899 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.422132 kubelet[1884]: E0129 14:13:52.421923 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.422132 kubelet[1884]: E0129 14:13:52.421958 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:13:52.429454 kubelet[1884]: E0129 14:13:52.429357 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:13:52.429454 kubelet[1884]: W0129 14:13:52.429395 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:13:52.429454 kubelet[1884]: E0129 14:13:52.429412 1884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:13:52.553262 containerd[1509]: time="2025-01-29T14:13:52.552841038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vn9db,Uid:2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea,Namespace:kube-system,Attempt:0,}" Jan 29 14:13:52.562669 containerd[1509]: time="2025-01-29T14:13:52.562173568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xmthx,Uid:333c60dc-4d8b-4109-8091-e96919908a21,Namespace:calico-system,Attempt:0,}" Jan 29 14:13:53.214060 kubelet[1884]: E0129 14:13:53.213999 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:53.217468 containerd[1509]: time="2025-01-29T14:13:53.217394419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:13:53.219739 containerd[1509]: time="2025-01-29T14:13:53.219678044Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:13:53.220818 containerd[1509]: time="2025-01-29T14:13:53.220765250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 14:13:53.222440 containerd[1509]: time="2025-01-29T14:13:53.221992894Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:13:53.222440 containerd[1509]: time="2025-01-29T14:13:53.222366064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 14:13:53.225117 containerd[1509]: time="2025-01-29T14:13:53.225037244Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:13:53.228839 containerd[1509]: time="2025-01-29T14:13:53.228416250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 675.170577ms" Jan 29 14:13:53.230327 containerd[1509]: time="2025-01-29T14:13:53.230268219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 667.906425ms" Jan 29 14:13:53.364207 kubelet[1884]: E0129 14:13:53.363711 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:13:53.376745 containerd[1509]: time="2025-01-29T14:13:53.374758273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:13:53.377739 containerd[1509]: time="2025-01-29T14:13:53.373985286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:13:53.377739 containerd[1509]: time="2025-01-29T14:13:53.377537385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:13:53.377739 containerd[1509]: time="2025-01-29T14:13:53.377581342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:13:53.377739 containerd[1509]: time="2025-01-29T14:13:53.377295843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:13:53.377739 containerd[1509]: time="2025-01-29T14:13:53.377325989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:13:53.378359 containerd[1509]: time="2025-01-29T14:13:53.377887004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:13:53.378680 containerd[1509]: time="2025-01-29T14:13:53.378551370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:13:53.395942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169647717.mount: Deactivated successfully. Jan 29 14:13:53.493367 systemd[1]: run-containerd-runc-k8s.io-9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc-runc.aIjIYM.mount: Deactivated successfully. Jan 29 14:13:53.498988 systemd[1]: run-containerd-runc-k8s.io-126387b6bba9800d3940e47314ee8c8a44b77c162a6f81836190085287ea2753-runc.qdYO2x.mount: Deactivated successfully. Jan 29 14:13:53.510373 systemd[1]: Started cri-containerd-126387b6bba9800d3940e47314ee8c8a44b77c162a6f81836190085287ea2753.scope - libcontainer container 126387b6bba9800d3940e47314ee8c8a44b77c162a6f81836190085287ea2753. 
Jan 29 14:13:53.512527 systemd[1]: Started cri-containerd-9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc.scope - libcontainer container 9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc. Jan 29 14:13:53.574260 containerd[1509]: time="2025-01-29T14:13:53.574174518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vn9db,Uid:2067ba57-7f06-46a1-81e3-b6ac1c2bd8ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"126387b6bba9800d3940e47314ee8c8a44b77c162a6f81836190085287ea2753\"" Jan 29 14:13:53.575038 containerd[1509]: time="2025-01-29T14:13:53.574438107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xmthx,Uid:333c60dc-4d8b-4109-8091-e96919908a21,Namespace:calico-system,Attempt:0,} returns sandbox id \"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\"" Jan 29 14:13:53.579005 containerd[1509]: time="2025-01-29T14:13:53.578896481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 14:13:54.214731 kubelet[1884]: E0129 14:13:54.214641 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:54.933537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365859944.mount: Deactivated successfully. 
Jan 29 14:13:55.068205 containerd[1509]: time="2025-01-29T14:13:55.068060994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:13:55.069703 containerd[1509]: time="2025-01-29T14:13:55.069422581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 14:13:55.079949 containerd[1509]: time="2025-01-29T14:13:55.079700667Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:13:55.082786 containerd[1509]: time="2025-01-29T14:13:55.082671869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:13:55.084741 containerd[1509]: time="2025-01-29T14:13:55.083989231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.504995378s" Jan 29 14:13:55.084741 containerd[1509]: time="2025-01-29T14:13:55.084050695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 14:13:55.086520 containerd[1509]: time="2025-01-29T14:13:55.086472994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 14:13:55.089043 containerd[1509]: time="2025-01-29T14:13:55.089009560Z" level=info msg="CreateContainer within sandbox 
\"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 14:13:55.122062 containerd[1509]: time="2025-01-29T14:13:55.122004717Z" level=info msg="CreateContainer within sandbox \"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24\"" Jan 29 14:13:55.123880 containerd[1509]: time="2025-01-29T14:13:55.123823398Z" level=info msg="StartContainer for \"3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24\"" Jan 29 14:13:55.172370 systemd[1]: Started cri-containerd-3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24.scope - libcontainer container 3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24. Jan 29 14:13:55.216336 kubelet[1884]: E0129 14:13:55.214932 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:55.218354 containerd[1509]: time="2025-01-29T14:13:55.218262185Z" level=info msg="StartContainer for \"3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24\" returns successfully" Jan 29 14:13:55.234306 systemd[1]: cri-containerd-3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24.scope: Deactivated successfully. 
Jan 29 14:13:55.322159 containerd[1509]: time="2025-01-29T14:13:55.321977473Z" level=info msg="shim disconnected" id=3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24 namespace=k8s.io Jan 29 14:13:55.322159 containerd[1509]: time="2025-01-29T14:13:55.322215628Z" level=warning msg="cleaning up after shim disconnected" id=3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24 namespace=k8s.io Jan 29 14:13:55.322159 containerd[1509]: time="2025-01-29T14:13:55.322235648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:13:55.364098 kubelet[1884]: E0129 14:13:55.363966 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:13:55.867784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3447457144da981e630e879a6212ee02e34f610327631d9f5a5100d8ee684e24-rootfs.mount: Deactivated successfully. Jan 29 14:13:56.216263 kubelet[1884]: E0129 14:13:56.215577 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:56.748377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994303135.mount: Deactivated successfully. 
Jan 29 14:13:57.216601 kubelet[1884]: E0129 14:13:57.216360 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:57.364442 kubelet[1884]: E0129 14:13:57.364258 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:13:57.489569 containerd[1509]: time="2025-01-29T14:13:57.489357147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:13:57.490920 containerd[1509]: time="2025-01-29T14:13:57.490570626Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474" Jan 29 14:13:57.491684 containerd[1509]: time="2025-01-29T14:13:57.491572403Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:13:57.494722 containerd[1509]: time="2025-01-29T14:13:57.494598673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:13:57.496549 containerd[1509]: time="2025-01-29T14:13:57.495840847Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.408649851s" Jan 29 14:13:57.496549 containerd[1509]: 
time="2025-01-29T14:13:57.495888979Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 14:13:57.498578 containerd[1509]: time="2025-01-29T14:13:57.498458674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 14:13:57.500224 containerd[1509]: time="2025-01-29T14:13:57.499970559Z" level=info msg="CreateContainer within sandbox \"126387b6bba9800d3940e47314ee8c8a44b77c162a6f81836190085287ea2753\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 14:13:57.523842 containerd[1509]: time="2025-01-29T14:13:57.523782345Z" level=info msg="CreateContainer within sandbox \"126387b6bba9800d3940e47314ee8c8a44b77c162a6f81836190085287ea2753\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4ea1e246b78f4486e8875f9776bdc18d698ee9fc1341ecd20b7a6ed53390dd5\"" Jan 29 14:13:57.524945 containerd[1509]: time="2025-01-29T14:13:57.524660381Z" level=info msg="StartContainer for \"e4ea1e246b78f4486e8875f9776bdc18d698ee9fc1341ecd20b7a6ed53390dd5\"" Jan 29 14:13:57.571880 systemd[1]: run-containerd-runc-k8s.io-e4ea1e246b78f4486e8875f9776bdc18d698ee9fc1341ecd20b7a6ed53390dd5-runc.EATIp0.mount: Deactivated successfully. Jan 29 14:13:57.588364 systemd[1]: Started cri-containerd-e4ea1e246b78f4486e8875f9776bdc18d698ee9fc1341ecd20b7a6ed53390dd5.scope - libcontainer container e4ea1e246b78f4486e8875f9776bdc18d698ee9fc1341ecd20b7a6ed53390dd5. 
Jan 29 14:13:57.637728 containerd[1509]: time="2025-01-29T14:13:57.637586918Z" level=info msg="StartContainer for \"e4ea1e246b78f4486e8875f9776bdc18d698ee9fc1341ecd20b7a6ed53390dd5\" returns successfully" Jan 29 14:13:58.217173 kubelet[1884]: E0129 14:13:58.217009 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:58.421082 kubelet[1884]: I0129 14:13:58.420723 1884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vn9db" podStartSLOduration=4.502184811 podStartE2EDuration="8.420676983s" podCreationTimestamp="2025-01-29 14:13:50 +0000 UTC" firstStartedPulling="2025-01-29 14:13:53.578849501 +0000 UTC m=+4.013561263" lastFinishedPulling="2025-01-29 14:13:57.497341644 +0000 UTC m=+7.932053435" observedRunningTime="2025-01-29 14:13:58.419070553 +0000 UTC m=+8.853782363" watchObservedRunningTime="2025-01-29 14:13:58.420676983 +0000 UTC m=+8.855388760" Jan 29 14:13:59.218266 kubelet[1884]: E0129 14:13:59.218180 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:13:59.363640 kubelet[1884]: E0129 14:13:59.363530 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:00.219300 kubelet[1884]: E0129 14:14:00.219156 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:01.219630 kubelet[1884]: E0129 14:14:01.219523 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:01.364660 kubelet[1884]: E0129 14:14:01.363973 1884 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:02.220473 kubelet[1884]: E0129 14:14:02.220330 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:02.612930 containerd[1509]: time="2025-01-29T14:14:02.612837342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:02.614348 containerd[1509]: time="2025-01-29T14:14:02.614286880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 14:14:02.615103 containerd[1509]: time="2025-01-29T14:14:02.615029433Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:02.618265 containerd[1509]: time="2025-01-29T14:14:02.618180009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:02.619648 containerd[1509]: time="2025-01-29T14:14:02.619407508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.120880699s" Jan 29 14:14:02.619648 containerd[1509]: time="2025-01-29T14:14:02.619461680Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 14:14:02.623180 containerd[1509]: time="2025-01-29T14:14:02.623145449Z" level=info msg="CreateContainer within sandbox \"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 14:14:02.642032 containerd[1509]: time="2025-01-29T14:14:02.641919190Z" level=info msg="CreateContainer within sandbox \"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71\"" Jan 29 14:14:02.643297 containerd[1509]: time="2025-01-29T14:14:02.643093840Z" level=info msg="StartContainer for \"b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71\"" Jan 29 14:14:02.699429 systemd[1]: Started cri-containerd-b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71.scope - libcontainer container b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71. Jan 29 14:14:02.749064 containerd[1509]: time="2025-01-29T14:14:02.748700274Z" level=info msg="StartContainer for \"b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71\" returns successfully" Jan 29 14:14:02.806065 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 14:14:03.220611 kubelet[1884]: E0129 14:14:03.220535 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:03.364290 kubelet[1884]: E0129 14:14:03.364182 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:03.585269 containerd[1509]: time="2025-01-29T14:14:03.585177872Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 14:14:03.588529 systemd[1]: cri-containerd-b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71.scope: Deactivated successfully. Jan 29 14:14:03.610720 kubelet[1884]: I0129 14:14:03.609424 1884 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 14:14:03.630821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71-rootfs.mount: Deactivated successfully. Jan 29 14:14:03.656680 systemd[1]: Started sshd@7-10.230.31.206:22-116.193.190.91:55736.service - OpenSSH per-connection server daemon (116.193.190.91:55736). 
Jan 29 14:14:03.847264 containerd[1509]: time="2025-01-29T14:14:03.846838632Z" level=info msg="shim disconnected" id=b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71 namespace=k8s.io Jan 29 14:14:03.847264 containerd[1509]: time="2025-01-29T14:14:03.847104415Z" level=warning msg="cleaning up after shim disconnected" id=b9445756ffd324e4176a1018e97259bc91a3f0cd7e535afc336d8296d48cce71 namespace=k8s.io Jan 29 14:14:03.847264 containerd[1509]: time="2025-01-29T14:14:03.847145836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:14:04.221229 kubelet[1884]: E0129 14:14:04.220988 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:04.415695 containerd[1509]: time="2025-01-29T14:14:04.415631373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 14:14:05.137257 sshd[2405]: Invalid user sabina from 116.193.190.91 port 55736 Jan 29 14:14:05.221582 kubelet[1884]: E0129 14:14:05.221478 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:05.374370 systemd[1]: Created slice kubepods-besteffort-podd7eeb6ab_5d1a_4a03_b3d9_08caf13db3b5.slice - libcontainer container kubepods-besteffort-podd7eeb6ab_5d1a_4a03_b3d9_08caf13db3b5.slice. 
Jan 29 14:14:05.380910 containerd[1509]: time="2025-01-29T14:14:05.380821294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:0,}" Jan 29 14:14:05.476911 containerd[1509]: time="2025-01-29T14:14:05.476708489Z" level=error msg="Failed to destroy network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:05.478874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e-shm.mount: Deactivated successfully. Jan 29 14:14:05.480351 containerd[1509]: time="2025-01-29T14:14:05.479999552Z" level=error msg="encountered an error cleaning up failed sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:05.480351 containerd[1509]: time="2025-01-29T14:14:05.480200458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:05.480754 kubelet[1884]: E0129 14:14:05.480660 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:05.481335 kubelet[1884]: E0129 14:14:05.480813 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:05.481335 kubelet[1884]: E0129 14:14:05.480863 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:05.481335 kubelet[1884]: E0129 14:14:05.480976 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" 
podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:05.664552 sshd[2405]: Received disconnect from 116.193.190.91 port 55736:11: Bye Bye [preauth] Jan 29 14:14:05.664552 sshd[2405]: Disconnected from invalid user sabina 116.193.190.91 port 55736 [preauth] Jan 29 14:14:05.670342 systemd[1]: sshd@7-10.230.31.206:22-116.193.190.91:55736.service: Deactivated successfully. Jan 29 14:14:06.221895 kubelet[1884]: E0129 14:14:06.221710 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:06.437140 kubelet[1884]: I0129 14:14:06.436958 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e" Jan 29 14:14:06.438543 containerd[1509]: time="2025-01-29T14:14:06.438493996Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:06.442134 containerd[1509]: time="2025-01-29T14:14:06.439473632Z" level=info msg="Ensure that sandbox 7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e in task-service has been cleanup successfully" Jan 29 14:14:06.444403 containerd[1509]: time="2025-01-29T14:14:06.444246584Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:06.444403 containerd[1509]: time="2025-01-29T14:14:06.444280515Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:06.449405 containerd[1509]: time="2025-01-29T14:14:06.445827965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:1,}" Jan 29 14:14:06.448170 systemd[1]: run-netns-cni\x2df8ed97f7\x2d0f30\x2d52fd\x2d8edc\x2d93879d2c7d03.mount: Deactivated successfully. 
Jan 29 14:14:06.566139 containerd[1509]: time="2025-01-29T14:14:06.566066690Z" level=error msg="Failed to destroy network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:06.570664 containerd[1509]: time="2025-01-29T14:14:06.570614596Z" level=error msg="encountered an error cleaning up failed sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:06.571127 containerd[1509]: time="2025-01-29T14:14:06.570796424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:06.571679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577-shm.mount: Deactivated successfully. 
Jan 29 14:14:06.572234 kubelet[1884]: E0129 14:14:06.571993 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:06.572234 kubelet[1884]: E0129 14:14:06.572095 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:06.572234 kubelet[1884]: E0129 14:14:06.572163 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:06.572431 kubelet[1884]: E0129 14:14:06.572250 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:07.223567 kubelet[1884]: E0129 14:14:07.223476 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:07.441055 kubelet[1884]: I0129 14:14:07.441007 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577" Jan 29 14:14:07.442407 containerd[1509]: time="2025-01-29T14:14:07.442360702Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:07.442845 containerd[1509]: time="2025-01-29T14:14:07.442750924Z" level=info msg="Ensure that sandbox c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577 in task-service has been cleanup successfully" Jan 29 14:14:07.445392 containerd[1509]: time="2025-01-29T14:14:07.445358646Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully" Jan 29 14:14:07.445519 containerd[1509]: time="2025-01-29T14:14:07.445470401Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully" Jan 29 14:14:07.446077 systemd[1]: run-netns-cni\x2dd2a7eb0c\x2df1e5\x2d26cf\x2d14bd\x2decd0da3d446f.mount: Deactivated successfully. 
Jan 29 14:14:07.446402 containerd[1509]: time="2025-01-29T14:14:07.446329212Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:07.446466 containerd[1509]: time="2025-01-29T14:14:07.446441198Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:07.446466 containerd[1509]: time="2025-01-29T14:14:07.446460225Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:07.448076 containerd[1509]: time="2025-01-29T14:14:07.447851412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:2,}" Jan 29 14:14:07.581342 containerd[1509]: time="2025-01-29T14:14:07.581266863Z" level=error msg="Failed to destroy network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:07.584567 containerd[1509]: time="2025-01-29T14:14:07.584525619Z" level=error msg="encountered an error cleaning up failed sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:07.584678 containerd[1509]: time="2025-01-29T14:14:07.584606386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:07.585003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8-shm.mount: Deactivated successfully. Jan 29 14:14:07.586544 kubelet[1884]: E0129 14:14:07.585786 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:07.586544 kubelet[1884]: E0129 14:14:07.586251 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:07.586544 kubelet[1884]: E0129 14:14:07.586317 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:07.586964 kubelet[1884]: E0129 14:14:07.586441 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:08.224768 kubelet[1884]: E0129 14:14:08.224618 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:08.448159 kubelet[1884]: I0129 14:14:08.447394 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8" Jan 29 14:14:08.451548 containerd[1509]: time="2025-01-29T14:14:08.448559993Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" Jan 29 14:14:08.451548 containerd[1509]: time="2025-01-29T14:14:08.448908064Z" level=info msg="Ensure that sandbox d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8 in task-service has been cleanup successfully" Jan 29 14:14:08.452505 containerd[1509]: time="2025-01-29T14:14:08.452418485Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully" Jan 29 14:14:08.452621 containerd[1509]: time="2025-01-29T14:14:08.452595679Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully" Jan 29 14:14:08.452688 systemd[1]: run-netns-cni\x2dcc05d8f0\x2d1085\x2da1cc\x2d8627\x2d1c091f4801db.mount: 
Deactivated successfully. Jan 29 14:14:08.454058 containerd[1509]: time="2025-01-29T14:14:08.453868863Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:08.454058 containerd[1509]: time="2025-01-29T14:14:08.453972574Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully" Jan 29 14:14:08.454058 containerd[1509]: time="2025-01-29T14:14:08.453991251Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully" Jan 29 14:14:08.454805 containerd[1509]: time="2025-01-29T14:14:08.454591135Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:08.454805 containerd[1509]: time="2025-01-29T14:14:08.454714909Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:08.454805 containerd[1509]: time="2025-01-29T14:14:08.454733757Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:08.455949 containerd[1509]: time="2025-01-29T14:14:08.455484924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:3,}" Jan 29 14:14:08.606154 containerd[1509]: time="2025-01-29T14:14:08.605497558Z" level=error msg="Failed to destroy network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:08.609214 containerd[1509]: time="2025-01-29T14:14:08.606639946Z" level=error msg="encountered an error cleaning up failed sandbox 
\"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:08.609214 containerd[1509]: time="2025-01-29T14:14:08.606726171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:08.609357 kubelet[1884]: E0129 14:14:08.608285 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:08.609357 kubelet[1884]: E0129 14:14:08.608357 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:08.609357 kubelet[1884]: E0129 14:14:08.608390 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:08.607892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d-shm.mount: Deactivated successfully. Jan 29 14:14:08.609904 kubelet[1884]: E0129 14:14:08.608473 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:08.934755 systemd[1]: Created slice kubepods-besteffort-pod9f03ac48_26b4_459a_b077_970ffd89c283.slice - libcontainer container kubepods-besteffort-pod9f03ac48_26b4_459a_b077_970ffd89c283.slice. 
Jan 29 14:14:08.981013 kubelet[1884]: I0129 14:14:08.980915 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7bp\" (UniqueName: \"kubernetes.io/projected/9f03ac48-26b4-459a-b077-970ffd89c283-kube-api-access-bj7bp\") pod \"nginx-deployment-7fcdb87857-p8xmr\" (UID: \"9f03ac48-26b4-459a-b077-970ffd89c283\") " pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:09.225579 kubelet[1884]: E0129 14:14:09.225415 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:09.245744 containerd[1509]: time="2025-01-29T14:14:09.244964580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:0,}" Jan 29 14:14:09.349095 containerd[1509]: time="2025-01-29T14:14:09.348756107Z" level=error msg="Failed to destroy network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.349514 containerd[1509]: time="2025-01-29T14:14:09.349400853Z" level=error msg="encountered an error cleaning up failed sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.349514 containerd[1509]: time="2025-01-29T14:14:09.349481431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.350331 kubelet[1884]: E0129 14:14:09.349826 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.350331 kubelet[1884]: E0129 14:14:09.349924 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:09.350331 kubelet[1884]: E0129 14:14:09.349961 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:09.350532 kubelet[1884]: E0129 14:14:09.350048 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-p8xmr" podUID="9f03ac48-26b4-459a-b077-970ffd89c283" Jan 29 14:14:09.458434 kubelet[1884]: I0129 14:14:09.458180 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d" Jan 29 14:14:09.459177 containerd[1509]: time="2025-01-29T14:14:09.459105072Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\"" Jan 29 14:14:09.460454 containerd[1509]: time="2025-01-29T14:14:09.460138229Z" level=info msg="Ensure that sandbox 83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d in task-service has been cleanup successfully" Jan 29 14:14:09.460454 containerd[1509]: time="2025-01-29T14:14:09.460353551Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully" Jan 29 14:14:09.460454 containerd[1509]: time="2025-01-29T14:14:09.460376361Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully" Jan 29 14:14:09.463297 containerd[1509]: time="2025-01-29T14:14:09.461607805Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" Jan 29 14:14:09.463368 kubelet[1884]: I0129 14:14:09.463004 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4" Jan 29 14:14:09.462504 systemd[1]: 
run-netns-cni\x2def9432a0\x2d8e40\x2d83bb\x2d95b2\x2d74bd9c63d568.mount: Deactivated successfully. Jan 29 14:14:09.464867 containerd[1509]: time="2025-01-29T14:14:09.463998440Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" Jan 29 14:14:09.464867 containerd[1509]: time="2025-01-29T14:14:09.464187571Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully" Jan 29 14:14:09.464867 containerd[1509]: time="2025-01-29T14:14:09.464216591Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully" Jan 29 14:14:09.464867 containerd[1509]: time="2025-01-29T14:14:09.464256757Z" level=info msg="Ensure that sandbox 3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4 in task-service has been cleanup successfully" Jan 29 14:14:09.466263 containerd[1509]: time="2025-01-29T14:14:09.465567605Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:09.466263 containerd[1509]: time="2025-01-29T14:14:09.465859490Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully" Jan 29 14:14:09.466263 containerd[1509]: time="2025-01-29T14:14:09.465886742Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully" Jan 29 14:14:09.466447 containerd[1509]: time="2025-01-29T14:14:09.466390905Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:09.466548 containerd[1509]: time="2025-01-29T14:14:09.466500614Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:09.466610 containerd[1509]: time="2025-01-29T14:14:09.466548954Z" level=info 
msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:09.467027 containerd[1509]: time="2025-01-29T14:14:09.466999149Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully" Jan 29 14:14:09.467233 containerd[1509]: time="2025-01-29T14:14:09.467135604Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully" Jan 29 14:14:09.469828 systemd[1]: run-netns-cni\x2d34994ab9\x2d60a2\x2dcb95\x2d4ce0\x2d494400cfedff.mount: Deactivated successfully. Jan 29 14:14:09.470874 containerd[1509]: time="2025-01-29T14:14:09.470267970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:1,}" Jan 29 14:14:09.476271 containerd[1509]: time="2025-01-29T14:14:09.475733729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:4,}" Jan 29 14:14:09.622042 containerd[1509]: time="2025-01-29T14:14:09.621968518Z" level=error msg="Failed to destroy network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.623091 containerd[1509]: time="2025-01-29T14:14:09.622691569Z" level=error msg="encountered an error cleaning up failed sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.623091 
containerd[1509]: time="2025-01-29T14:14:09.622794403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.623534 kubelet[1884]: E0129 14:14:09.623371 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.623635 kubelet[1884]: E0129 14:14:09.623570 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:09.623635 kubelet[1884]: E0129 14:14:09.623616 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:09.623751 kubelet[1884]: 
E0129 14:14:09.623693 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-p8xmr" podUID="9f03ac48-26b4-459a-b077-970ffd89c283" Jan 29 14:14:09.665382 containerd[1509]: time="2025-01-29T14:14:09.665320159Z" level=error msg="Failed to destroy network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.666006 containerd[1509]: time="2025-01-29T14:14:09.665971002Z" level=error msg="encountered an error cleaning up failed sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.666213 containerd[1509]: time="2025-01-29T14:14:09.666176985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.666760 kubelet[1884]: E0129 14:14:09.666570 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:09.666760 kubelet[1884]: E0129 14:14:09.666660 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:09.666901 kubelet[1884]: E0129 14:14:09.666766 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:09.666901 kubelet[1884]: E0129 14:14:09.666833 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:10.209878 kubelet[1884]: E0129 14:14:10.209810 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:10.226797 kubelet[1884]: E0129 14:14:10.226523 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:10.453517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7-shm.mount: Deactivated successfully. Jan 29 14:14:10.454288 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921-shm.mount: Deactivated successfully. 
Jan 29 14:14:10.472903 kubelet[1884]: I0129 14:14:10.472784 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7" Jan 29 14:14:10.475850 containerd[1509]: time="2025-01-29T14:14:10.475704704Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\"" Jan 29 14:14:10.477304 containerd[1509]: time="2025-01-29T14:14:10.477271337Z" level=info msg="Ensure that sandbox b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7 in task-service has been cleanup successfully" Jan 29 14:14:10.483168 kubelet[1884]: I0129 14:14:10.478681 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921" Jan 29 14:14:10.481439 systemd[1]: run-netns-cni\x2dc8a6b5c5\x2dac6a\x2d549b\x2daa0d\x2d4d71ca121076.mount: Deactivated successfully. Jan 29 14:14:10.483396 containerd[1509]: time="2025-01-29T14:14:10.480199519Z" level=info msg="TearDown network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" successfully" Jan 29 14:14:10.483396 containerd[1509]: time="2025-01-29T14:14:10.480236083Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" returns successfully" Jan 29 14:14:10.483396 containerd[1509]: time="2025-01-29T14:14:10.480408610Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\"" Jan 29 14:14:10.483396 containerd[1509]: time="2025-01-29T14:14:10.480645792Z" level=info msg="Ensure that sandbox f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921 in task-service has been cleanup successfully" Jan 29 14:14:10.484430 containerd[1509]: time="2025-01-29T14:14:10.484396483Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\"" Jan 29 
14:14:10.484534 containerd[1509]: time="2025-01-29T14:14:10.484508733Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully" Jan 29 14:14:10.484602 containerd[1509]: time="2025-01-29T14:14:10.484534761Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully" Jan 29 14:14:10.485389 systemd[1]: run-netns-cni\x2d4c63f7df\x2d5681\x2d586c\x2d3ce8\x2d86f26317c4ab.mount: Deactivated successfully. Jan 29 14:14:10.486002 containerd[1509]: time="2025-01-29T14:14:10.485576433Z" level=info msg="TearDown network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" successfully" Jan 29 14:14:10.486002 containerd[1509]: time="2025-01-29T14:14:10.485599494Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" returns successfully" Jan 29 14:14:10.487434 containerd[1509]: time="2025-01-29T14:14:10.487012479Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" Jan 29 14:14:10.487434 containerd[1509]: time="2025-01-29T14:14:10.487146005Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully" Jan 29 14:14:10.487434 containerd[1509]: time="2025-01-29T14:14:10.487175232Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully" Jan 29 14:14:10.487434 containerd[1509]: time="2025-01-29T14:14:10.487257635Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" Jan 29 14:14:10.487434 containerd[1509]: time="2025-01-29T14:14:10.487342256Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully" Jan 29 14:14:10.487434 containerd[1509]: 
time="2025-01-29T14:14:10.487360168Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully" Jan 29 14:14:10.489312 containerd[1509]: time="2025-01-29T14:14:10.489281960Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:10.489467 containerd[1509]: time="2025-01-29T14:14:10.489423423Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully" Jan 29 14:14:10.489467 containerd[1509]: time="2025-01-29T14:14:10.489443901Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully" Jan 29 14:14:10.490735 containerd[1509]: time="2025-01-29T14:14:10.489954423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:2,}" Jan 29 14:14:10.491070 containerd[1509]: time="2025-01-29T14:14:10.491039572Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:10.491190 containerd[1509]: time="2025-01-29T14:14:10.491164074Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:10.491511 containerd[1509]: time="2025-01-29T14:14:10.491189845Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:10.491883 containerd[1509]: time="2025-01-29T14:14:10.491850948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:5,}" Jan 29 14:14:10.711758 containerd[1509]: time="2025-01-29T14:14:10.711634639Z" level=error msg="Failed to destroy network for sandbox 
\"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.713511 containerd[1509]: time="2025-01-29T14:14:10.712081261Z" level=error msg="encountered an error cleaning up failed sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.713511 containerd[1509]: time="2025-01-29T14:14:10.712201099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.713980 kubelet[1884]: E0129 14:14:10.712556 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.713980 kubelet[1884]: E0129 14:14:10.712709 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:10.713980 kubelet[1884]: E0129 14:14:10.712800 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr" Jan 29 14:14:10.714530 kubelet[1884]: E0129 14:14:10.712989 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-p8xmr" podUID="9f03ac48-26b4-459a-b077-970ffd89c283" Jan 29 14:14:10.716126 containerd[1509]: time="2025-01-29T14:14:10.715470117Z" level=error msg="Failed to destroy network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.716857 containerd[1509]: time="2025-01-29T14:14:10.716820957Z" level=error msg="encountered an 
error cleaning up failed sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.717014 containerd[1509]: time="2025-01-29T14:14:10.716978953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.717493 kubelet[1884]: E0129 14:14:10.717283 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:14:10.717493 kubelet[1884]: E0129 14:14:10.717330 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:10.717493 kubelet[1884]: E0129 14:14:10.717365 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz" Jan 29 14:14:10.717734 kubelet[1884]: E0129 14:14:10.717425 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5" Jan 29 14:14:11.227769 kubelet[1884]: E0129 14:14:11.227646 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:11.454963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac-shm.mount: Deactivated successfully. Jan 29 14:14:11.456468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f-shm.mount: Deactivated successfully. 
Jan 29 14:14:11.483761 kubelet[1884]: I0129 14:14:11.483671 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f" Jan 29 14:14:11.486099 containerd[1509]: time="2025-01-29T14:14:11.486051509Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\"" Jan 29 14:14:11.486996 containerd[1509]: time="2025-01-29T14:14:11.486964120Z" level=info msg="Ensure that sandbox 4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f in task-service has been cleanup successfully" Jan 29 14:14:11.488786 containerd[1509]: time="2025-01-29T14:14:11.488739694Z" level=info msg="TearDown network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" successfully" Jan 29 14:14:11.489144 containerd[1509]: time="2025-01-29T14:14:11.489016620Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" returns successfully" Jan 29 14:14:11.492826 systemd[1]: run-netns-cni\x2d8ffecc05\x2d10e5\x2ddabf\x2d84ea\x2daa123f388e3e.mount: Deactivated successfully. 
Jan 29 14:14:11.493415 containerd[1509]: time="2025-01-29T14:14:11.493304831Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\"" Jan 29 14:14:11.493497 containerd[1509]: time="2025-01-29T14:14:11.493413157Z" level=info msg="TearDown network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" successfully" Jan 29 14:14:11.493497 containerd[1509]: time="2025-01-29T14:14:11.493431932Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" returns successfully" Jan 29 14:14:11.494181 kubelet[1884]: I0129 14:14:11.493880 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac" Jan 29 14:14:11.495821 containerd[1509]: time="2025-01-29T14:14:11.495580033Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" Jan 29 14:14:11.495821 containerd[1509]: time="2025-01-29T14:14:11.495723045Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully" Jan 29 14:14:11.495821 containerd[1509]: time="2025-01-29T14:14:11.495742942Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully" Jan 29 14:14:11.495821 containerd[1509]: time="2025-01-29T14:14:11.495810111Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\"" Jan 29 14:14:11.496044 containerd[1509]: time="2025-01-29T14:14:11.496012434Z" level=info msg="Ensure that sandbox 88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac in task-service has been cleanup successfully" Jan 29 14:14:11.498963 systemd[1]: run-netns-cni\x2dc74888e2\x2dc0e4\x2dc70a\x2d2871\x2d78af3b270d2f.mount: Deactivated successfully. 
Jan 29 14:14:11.500535 containerd[1509]: time="2025-01-29T14:14:11.500504570Z" level=info msg="TearDown network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" successfully"
Jan 29 14:14:11.500816 containerd[1509]: time="2025-01-29T14:14:11.500534664Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" returns successfully"
Jan 29 14:14:11.502195 containerd[1509]: time="2025-01-29T14:14:11.501880571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:3,}"
Jan 29 14:14:11.508884 containerd[1509]: time="2025-01-29T14:14:11.508814010Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\""
Jan 29 14:14:11.508999 containerd[1509]: time="2025-01-29T14:14:11.508924884Z" level=info msg="TearDown network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" successfully"
Jan 29 14:14:11.508999 containerd[1509]: time="2025-01-29T14:14:11.508944482Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" returns successfully"
Jan 29 14:14:11.514301 containerd[1509]: time="2025-01-29T14:14:11.512381133Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\""
Jan 29 14:14:11.514301 containerd[1509]: time="2025-01-29T14:14:11.512492272Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully"
Jan 29 14:14:11.514301 containerd[1509]: time="2025-01-29T14:14:11.512518560Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully"
Jan 29 14:14:11.520997 containerd[1509]: time="2025-01-29T14:14:11.516417142Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\""
Jan 29 14:14:11.520997 containerd[1509]: time="2025-01-29T14:14:11.516516744Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully"
Jan 29 14:14:11.520997 containerd[1509]: time="2025-01-29T14:14:11.516535100Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully"
Jan 29 14:14:11.520997 containerd[1509]: time="2025-01-29T14:14:11.516890352Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\""
Jan 29 14:14:11.520997 containerd[1509]: time="2025-01-29T14:14:11.517069637Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully"
Jan 29 14:14:11.520997 containerd[1509]: time="2025-01-29T14:14:11.517124800Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully"
Jan 29 14:14:11.526903 containerd[1509]: time="2025-01-29T14:14:11.526821502Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\""
Jan 29 14:14:11.527214 containerd[1509]: time="2025-01-29T14:14:11.526951200Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully"
Jan 29 14:14:11.527214 containerd[1509]: time="2025-01-29T14:14:11.526975194Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully"
Jan 29 14:14:11.528869 containerd[1509]: time="2025-01-29T14:14:11.527779403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:6,}"
Jan 29 14:14:11.677712 containerd[1509]: time="2025-01-29T14:14:11.677614945Z" level=error msg="Failed to destroy network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.678579 containerd[1509]: time="2025-01-29T14:14:11.678327823Z" level=error msg="encountered an error cleaning up failed sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.678579 containerd[1509]: time="2025-01-29T14:14:11.678403321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.679361 kubelet[1884]: E0129 14:14:11.678856 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.679361 kubelet[1884]: E0129 14:14:11.678934 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr"
Jan 29 14:14:11.679361 kubelet[1884]: E0129 14:14:11.678988 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr"
Jan 29 14:14:11.679661 kubelet[1884]: E0129 14:14:11.679067 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-p8xmr" podUID="9f03ac48-26b4-459a-b077-970ffd89c283"
Jan 29 14:14:11.697618 containerd[1509]: time="2025-01-29T14:14:11.697135837Z" level=error msg="Failed to destroy network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.697850 containerd[1509]: time="2025-01-29T14:14:11.697811309Z" level=error msg="encountered an error cleaning up failed sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.697922 containerd[1509]: time="2025-01-29T14:14:11.697889327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.698272 kubelet[1884]: E0129 14:14:11.698224 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:11.698505 kubelet[1884]: E0129 14:14:11.698302 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:14:11.698505 kubelet[1884]: E0129 14:14:11.698337 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:14:11.698505 kubelet[1884]: E0129 14:14:11.698424 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5"
Jan 29 14:14:12.228614 kubelet[1884]: E0129 14:14:12.228557 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:14:12.454559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434-shm.mount: Deactivated successfully.
Jan 29 14:14:12.499762 kubelet[1884]: I0129 14:14:12.498714 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18"
Jan 29 14:14:12.499990 containerd[1509]: time="2025-01-29T14:14:12.499657317Z" level=info msg="StopPodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\""
Jan 29 14:14:12.499990 containerd[1509]: time="2025-01-29T14:14:12.499921525Z" level=info msg="Ensure that sandbox 407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18 in task-service has been cleanup successfully"
Jan 29 14:14:12.503701 containerd[1509]: time="2025-01-29T14:14:12.502502602Z" level=info msg="TearDown network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" successfully"
Jan 29 14:14:12.503701 containerd[1509]: time="2025-01-29T14:14:12.502534172Z" level=info msg="StopPodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" returns successfully"
Jan 29 14:14:12.503283 systemd[1]: run-netns-cni\x2db35db4b3\x2d2788\x2d8bed\x2daef0\x2de1ca465d0bd3.mount: Deactivated successfully.
Jan 29 14:14:12.504640 containerd[1509]: time="2025-01-29T14:14:12.504606700Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\""
Jan 29 14:14:12.504801 containerd[1509]: time="2025-01-29T14:14:12.504771264Z" level=info msg="TearDown network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" successfully"
Jan 29 14:14:12.504871 containerd[1509]: time="2025-01-29T14:14:12.504800336Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" returns successfully"
Jan 29 14:14:12.506484 containerd[1509]: time="2025-01-29T14:14:12.506445705Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\""
Jan 29 14:14:12.507847 containerd[1509]: time="2025-01-29T14:14:12.506701625Z" level=info msg="TearDown network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" successfully"
Jan 29 14:14:12.507847 containerd[1509]: time="2025-01-29T14:14:12.506745084Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" returns successfully"
Jan 29 14:14:12.508281 containerd[1509]: time="2025-01-29T14:14:12.508089226Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\""
Jan 29 14:14:12.508633 containerd[1509]: time="2025-01-29T14:14:12.508579832Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully"
Jan 29 14:14:12.508633 containerd[1509]: time="2025-01-29T14:14:12.508627435Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully"
Jan 29 14:14:12.509326 containerd[1509]: time="2025-01-29T14:14:12.509293306Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\""
Jan 29 14:14:12.509476 containerd[1509]: time="2025-01-29T14:14:12.509446282Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully"
Jan 29 14:14:12.509476 containerd[1509]: time="2025-01-29T14:14:12.509471432Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully"
Jan 29 14:14:12.509919 containerd[1509]: time="2025-01-29T14:14:12.509882867Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\""
Jan 29 14:14:12.511233 containerd[1509]: time="2025-01-29T14:14:12.510006001Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully"
Jan 29 14:14:12.511233 containerd[1509]: time="2025-01-29T14:14:12.511225649Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully"
Jan 29 14:14:12.512249 kubelet[1884]: I0129 14:14:12.511540 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434"
Jan 29 14:14:12.512351 containerd[1509]: time="2025-01-29T14:14:12.511700936Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\""
Jan 29 14:14:12.512351 containerd[1509]: time="2025-01-29T14:14:12.511791748Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully"
Jan 29 14:14:12.512351 containerd[1509]: time="2025-01-29T14:14:12.511809891Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully"
Jan 29 14:14:12.512351 containerd[1509]: time="2025-01-29T14:14:12.512341485Z" level=info msg="StopPodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\""
Jan 29 14:14:12.512663 containerd[1509]: time="2025-01-29T14:14:12.512632399Z" level=info msg="Ensure that sandbox 2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434 in task-service has been cleanup successfully"
Jan 29 14:14:12.513080 containerd[1509]: time="2025-01-29T14:14:12.513046628Z" level=info msg="TearDown network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" successfully"
Jan 29 14:14:12.513080 containerd[1509]: time="2025-01-29T14:14:12.513074649Z" level=info msg="StopPodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" returns successfully"
Jan 29 14:14:12.515273 containerd[1509]: time="2025-01-29T14:14:12.515238886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:7,}"
Jan 29 14:14:12.516849 systemd[1]: run-netns-cni\x2d10527f78\x2d6af4\x2d3842\x2d8171\x2d0f5e13adcfdc.mount: Deactivated successfully.
Jan 29 14:14:12.517735 containerd[1509]: time="2025-01-29T14:14:12.517590896Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\""
Jan 29 14:14:12.517735 containerd[1509]: time="2025-01-29T14:14:12.517710338Z" level=info msg="TearDown network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" successfully"
Jan 29 14:14:12.517735 containerd[1509]: time="2025-01-29T14:14:12.517730762Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" returns successfully"
Jan 29 14:14:12.520279 containerd[1509]: time="2025-01-29T14:14:12.519300927Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\""
Jan 29 14:14:12.520279 containerd[1509]: time="2025-01-29T14:14:12.519407141Z" level=info msg="TearDown network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" successfully"
Jan 29 14:14:12.520279 containerd[1509]: time="2025-01-29T14:14:12.519426779Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" returns successfully"
Jan 29 14:14:12.520279 containerd[1509]: time="2025-01-29T14:14:12.520194333Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\""
Jan 29 14:14:12.520625 containerd[1509]: time="2025-01-29T14:14:12.520590380Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully"
Jan 29 14:14:12.520625 containerd[1509]: time="2025-01-29T14:14:12.520618052Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully"
Jan 29 14:14:12.521382 containerd[1509]: time="2025-01-29T14:14:12.521337999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:4,}"
Jan 29 14:14:12.694342 containerd[1509]: time="2025-01-29T14:14:12.694179470Z" level=error msg="Failed to destroy network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.694999 containerd[1509]: time="2025-01-29T14:14:12.694962641Z" level=error msg="encountered an error cleaning up failed sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.695253 containerd[1509]: time="2025-01-29T14:14:12.695192809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.697170 kubelet[1884]: E0129 14:14:12.696215 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.697170 kubelet[1884]: E0129 14:14:12.696343 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:14:12.697170 kubelet[1884]: E0129 14:14:12.696384 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bkqlz"
Jan 29 14:14:12.697407 kubelet[1884]: E0129 14:14:12.696467 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bkqlz_calico-system(d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bkqlz" podUID="d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5"
Jan 29 14:14:12.754456 containerd[1509]: time="2025-01-29T14:14:12.754280025Z" level=error msg="Failed to destroy network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.757728 containerd[1509]: time="2025-01-29T14:14:12.757137706Z" level=error msg="encountered an error cleaning up failed sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.757728 containerd[1509]: time="2025-01-29T14:14:12.757246218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.758652 kubelet[1884]: E0129 14:14:12.758163 1884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:14:12.758652 kubelet[1884]: E0129 14:14:12.758245 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr"
Jan 29 14:14:12.758652 kubelet[1884]: E0129 14:14:12.758280 1884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-p8xmr"
Jan 29 14:14:12.758869 kubelet[1884]: E0129 14:14:12.758365 1884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-p8xmr_default(9f03ac48-26b4-459a-b077-970ffd89c283)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-p8xmr" podUID="9f03ac48-26b4-459a-b077-970ffd89c283"
Jan 29 14:14:13.024288 containerd[1509]: time="2025-01-29T14:14:13.023504628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:14:13.024749 containerd[1509]: time="2025-01-29T14:14:13.024654758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 29 14:14:13.025600 containerd[1509]: time="2025-01-29T14:14:13.025523123Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:14:13.028264 containerd[1509]: time="2025-01-29T14:14:13.028178812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:14:13.029696 containerd[1509]: time="2025-01-29T14:14:13.029450020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.613740349s"
Jan 29 14:14:13.029696 containerd[1509]: time="2025-01-29T14:14:13.029506276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 29 14:14:13.058867 containerd[1509]: time="2025-01-29T14:14:13.058800584Z" level=info msg="CreateContainer within sandbox \"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 29 14:14:13.076166 containerd[1509]: time="2025-01-29T14:14:13.076015056Z" level=info msg="CreateContainer within sandbox \"9673d8a4a6ed1cb757c63b233b03643afd9f147854632fc42592d35fedc52fbc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6\""
Jan 29 14:14:13.077540 containerd[1509]: time="2025-01-29T14:14:13.077442664Z" level=info msg="StartContainer for \"3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6\""
Jan 29 14:14:13.202472 systemd[1]: Started cri-containerd-3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6.scope - libcontainer container 3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6.
Jan 29 14:14:13.229843 kubelet[1884]: E0129 14:14:13.229735 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:14:13.257629 containerd[1509]: time="2025-01-29T14:14:13.257524989Z" level=info msg="StartContainer for \"3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6\" returns successfully"
Jan 29 14:14:13.371310 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 29 14:14:13.371870 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Jan 29 14:14:13.460644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71-shm.mount: Deactivated successfully.
Jan 29 14:14:13.461493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa-shm.mount: Deactivated successfully.
Jan 29 14:14:13.461631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275248478.mount: Deactivated successfully.
Jan 29 14:14:13.520051 kubelet[1884]: I0129 14:14:13.519653 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71" Jan 29 14:14:13.520614 containerd[1509]: time="2025-01-29T14:14:13.520557746Z" level=info msg="StopPodSandbox for \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\"" Jan 29 14:14:13.521215 containerd[1509]: time="2025-01-29T14:14:13.520891613Z" level=info msg="Ensure that sandbox e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71 in task-service has been cleanup successfully" Jan 29 14:14:13.525950 containerd[1509]: time="2025-01-29T14:14:13.523975608Z" level=info msg="TearDown network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\" successfully" Jan 29 14:14:13.525950 containerd[1509]: time="2025-01-29T14:14:13.524007859Z" level=info msg="StopPodSandbox for \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\" returns successfully" Jan 29 14:14:13.525474 systemd[1]: run-netns-cni\x2d47f6f7f4\x2d6e2d\x2db378\x2d83b1\x2d584e289befd4.mount: Deactivated successfully. 
Jan 29 14:14:13.527328 containerd[1509]: time="2025-01-29T14:14:13.526735062Z" level=info msg="StopPodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\""
Jan 29 14:14:13.527328 containerd[1509]: time="2025-01-29T14:14:13.526845961Z" level=info msg="TearDown network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" successfully"
Jan 29 14:14:13.527328 containerd[1509]: time="2025-01-29T14:14:13.526865425Z" level=info msg="StopPodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" returns successfully"
Jan 29 14:14:13.528625 containerd[1509]: time="2025-01-29T14:14:13.528432216Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\""
Jan 29 14:14:13.528625 containerd[1509]: time="2025-01-29T14:14:13.528533085Z" level=info msg="TearDown network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" successfully"
Jan 29 14:14:13.529365 containerd[1509]: time="2025-01-29T14:14:13.528625856Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" returns successfully"
Jan 29 14:14:13.530601 containerd[1509]: time="2025-01-29T14:14:13.529603230Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\""
Jan 29 14:14:13.530601 containerd[1509]: time="2025-01-29T14:14:13.529720626Z" level=info msg="TearDown network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" successfully"
Jan 29 14:14:13.530601 containerd[1509]: time="2025-01-29T14:14:13.529751853Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" returns successfully"
Jan 29 14:14:13.530601 containerd[1509]: time="2025-01-29T14:14:13.530378309Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\""
Jan 29 14:14:13.530601 containerd[1509]: time="2025-01-29T14:14:13.530474906Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully"
Jan 29 14:14:13.530601 containerd[1509]: time="2025-01-29T14:14:13.530494572Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully"
Jan 29 14:14:13.532717 containerd[1509]: time="2025-01-29T14:14:13.531684306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:5,}"
Jan 29 14:14:13.546380 kubelet[1884]: I0129 14:14:13.545308 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa"
Jan 29 14:14:13.546552 containerd[1509]: time="2025-01-29T14:14:13.546274889Z" level=info msg="StopPodSandbox for \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\""
Jan 29 14:14:13.546942 containerd[1509]: time="2025-01-29T14:14:13.546907938Z" level=info msg="Ensure that sandbox f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa in task-service has been cleanup successfully"
Jan 29 14:14:13.551088 systemd[1]: run-netns-cni\x2d2127747b\x2dad61\x2dd174\x2df12b\x2d26ac646b09f6.mount: Deactivated successfully.
Jan 29 14:14:13.557683 containerd[1509]: time="2025-01-29T14:14:13.553170752Z" level=info msg="TearDown network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\" successfully"
Jan 29 14:14:13.557683 containerd[1509]: time="2025-01-29T14:14:13.553221276Z" level=info msg="StopPodSandbox for \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\" returns successfully"
Jan 29 14:14:13.557888 kubelet[1884]: I0129 14:14:13.556474 1884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xmthx" podStartSLOduration=4.101924843 podStartE2EDuration="23.556429047s" podCreationTimestamp="2025-01-29 14:13:50 +0000 UTC" firstStartedPulling="2025-01-29 14:13:53.576912611 +0000 UTC m=+4.011624379" lastFinishedPulling="2025-01-29 14:14:13.031416816 +0000 UTC m=+23.466128583" observedRunningTime="2025-01-29 14:14:13.55617577 +0000 UTC m=+23.990887550" watchObservedRunningTime="2025-01-29 14:14:13.556429047 +0000 UTC m=+23.991140814"
Jan 29 14:14:13.561372 containerd[1509]: time="2025-01-29T14:14:13.561110686Z" level=info msg="StopPodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\""
Jan 29 14:14:13.561513 containerd[1509]: time="2025-01-29T14:14:13.561470672Z" level=info msg="TearDown network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" successfully"
Jan 29 14:14:13.561513 containerd[1509]: time="2025-01-29T14:14:13.561510035Z" level=info msg="StopPodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" returns successfully"
Jan 29 14:14:13.565100 containerd[1509]: time="2025-01-29T14:14:13.565064275Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\""
Jan 29 14:14:13.565313 containerd[1509]: time="2025-01-29T14:14:13.565276583Z" level=info msg="TearDown network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" successfully"
Jan 29 14:14:13.565411 containerd[1509]: time="2025-01-29T14:14:13.565312478Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" returns successfully"
Jan 29 14:14:13.566140 containerd[1509]: time="2025-01-29T14:14:13.565808922Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\""
Jan 29 14:14:13.566140 containerd[1509]: time="2025-01-29T14:14:13.565918312Z" level=info msg="TearDown network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" successfully"
Jan 29 14:14:13.566140 containerd[1509]: time="2025-01-29T14:14:13.565938392Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" returns successfully"
Jan 29 14:14:13.566939 containerd[1509]: time="2025-01-29T14:14:13.566749991Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\""
Jan 29 14:14:13.566939 containerd[1509]: time="2025-01-29T14:14:13.566903152Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully"
Jan 29 14:14:13.567356 containerd[1509]: time="2025-01-29T14:14:13.567156674Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully"
Jan 29 14:14:13.568078 containerd[1509]: time="2025-01-29T14:14:13.567934158Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\""
Jan 29 14:14:13.568409 containerd[1509]: time="2025-01-29T14:14:13.568365264Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully"
Jan 29 14:14:13.568584 containerd[1509]: time="2025-01-29T14:14:13.568390075Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully"
Jan 29 14:14:13.570299 containerd[1509]: time="2025-01-29T14:14:13.569476622Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\""
Jan 29 14:14:13.570299 containerd[1509]: time="2025-01-29T14:14:13.569611298Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully"
Jan 29 14:14:13.570299 containerd[1509]: time="2025-01-29T14:14:13.569637838Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully"
Jan 29 14:14:13.575477 containerd[1509]: time="2025-01-29T14:14:13.575419002Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\""
Jan 29 14:14:13.575652 containerd[1509]: time="2025-01-29T14:14:13.575605574Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully"
Jan 29 14:14:13.575652 containerd[1509]: time="2025-01-29T14:14:13.575632767Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully"
Jan 29 14:14:13.576990 containerd[1509]: time="2025-01-29T14:14:13.576700882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:8,}"
Jan 29 14:14:13.872919 systemd-networkd[1422]: cali9c6e385027f: Link UP
Jan 29 14:14:13.873292 systemd-networkd[1422]: cali9c6e385027f: Gained carrier
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.668 [INFO][2903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.690 [INFO][2903] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.31.206-k8s-csi--node--driver--bkqlz-eth0 csi-node-driver- calico-system d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5 1129 0 2025-01-29 14:13:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.31.206 csi-node-driver-bkqlz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9c6e385027f [] []}} ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.690 [INFO][2903] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.773 [INFO][2931] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" HandleID="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Workload="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.797 [INFO][2931] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" HandleID="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Workload="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fb780), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.31.206", "pod":"csi-node-driver-bkqlz", "timestamp":"2025-01-29 14:14:13.773008557 +0000 UTC"}, Hostname:"10.230.31.206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.797 [INFO][2931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.797 [INFO][2931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.797 [INFO][2931] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.31.206'
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.809 [INFO][2931] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.815 [INFO][2931] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.822 [INFO][2931] ipam/ipam.go 489: Trying affinity for 192.168.115.0/26 host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.825 [INFO][2931] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.0/26 host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.829 [INFO][2931] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.829 [INFO][2931] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.831 [INFO][2931] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.839 [INFO][2931] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.855 [INFO][2931] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.1/26] block=192.168.115.0/26 handle="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.855 [INFO][2931] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.1/26] handle="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" host="10.230.31.206"
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.855 [INFO][2931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:14:13.900879 containerd[1509]: 2025-01-29 14:14:13.855 [INFO][2931] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.1/26] IPv6=[] ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" HandleID="k8s-pod-network.48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Workload="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.902281 containerd[1509]: 2025-01-29 14:14:13.859 [INFO][2903] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-csi--node--driver--bkqlz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 13, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"", Pod:"csi-node-driver-bkqlz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c6e385027f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:14:13.902281 containerd[1509]: 2025-01-29 14:14:13.859 [INFO][2903] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.1/32] ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.902281 containerd[1509]: 2025-01-29 14:14:13.859 [INFO][2903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c6e385027f ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.902281 containerd[1509]: 2025-01-29 14:14:13.873 [INFO][2903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.902281 containerd[1509]: 2025-01-29 14:14:13.874 [INFO][2903] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-csi--node--driver--bkqlz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 13, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d", Pod:"csi-node-driver-bkqlz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c6e385027f", MAC:"e2:49:8a:1b:89:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:14:13.902281 containerd[1509]: 2025-01-29 14:14:13.898 [INFO][2903] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d" Namespace="calico-system" Pod="csi-node-driver-bkqlz" WorkloadEndpoint="10.230.31.206-k8s-csi--node--driver--bkqlz-eth0"
Jan 29 14:14:13.951441 containerd[1509]: time="2025-01-29T14:14:13.950942516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 14:14:13.951441 containerd[1509]: time="2025-01-29T14:14:13.951067042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 14:14:13.951441 containerd[1509]: time="2025-01-29T14:14:13.951100552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:14:13.951441 containerd[1509]: time="2025-01-29T14:14:13.951265246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:14:13.980342 systemd-networkd[1422]: cali12611eb97eb: Link UP
Jan 29 14:14:13.980741 systemd-networkd[1422]: cali12611eb97eb: Gained carrier
Jan 29 14:14:13.982388 systemd[1]: Started cri-containerd-48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d.scope - libcontainer container 48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d.
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.630 [INFO][2887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.689 [INFO][2887] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0 nginx-deployment-7fcdb87857- default 9f03ac48-26b4-459a-b077-970ffd89c283 1216 0 2025-01-29 14:14:08 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.31.206 nginx-deployment-7fcdb87857-p8xmr eth0 default [] [] [kns.default ksa.default.default] cali12611eb97eb [] []}} ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.689 [INFO][2887] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.772 [INFO][2930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" HandleID="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Workload="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.797 [INFO][2930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" HandleID="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Workload="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000429cf0), Attrs:map[string]string{"namespace":"default", "node":"10.230.31.206", "pod":"nginx-deployment-7fcdb87857-p8xmr", "timestamp":"2025-01-29 14:14:13.772546829 +0000 UTC"}, Hostname:"10.230.31.206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.797 [INFO][2930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.855 [INFO][2930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.856 [INFO][2930] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.31.206'
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.904 [INFO][2930] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.911 [INFO][2930] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.925 [INFO][2930] ipam/ipam.go 489: Trying affinity for 192.168.115.0/26 host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.928 [INFO][2930] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.0/26 host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.932 [INFO][2930] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.932 [INFO][2930] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.936 [INFO][2930] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.948 [INFO][2930] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.961 [INFO][2930] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.2/26] block=192.168.115.0/26 handle="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.961 [INFO][2930] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.2/26] handle="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" host="10.230.31.206"
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.961 [INFO][2930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:14:13.998849 containerd[1509]: 2025-01-29 14:14:13.962 [INFO][2930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.2/26] IPv6=[] ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" HandleID="k8s-pod-network.cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Workload="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:14.003556 containerd[1509]: 2025-01-29 14:14:13.965 [INFO][2887] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"9f03ac48-26b4-459a-b077-970ffd89c283", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 14, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-p8xmr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali12611eb97eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:14:14.003556 containerd[1509]: 2025-01-29 14:14:13.966 [INFO][2887] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.2/32] ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:14.003556 containerd[1509]: 2025-01-29 14:14:13.966 [INFO][2887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12611eb97eb ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:14.003556 containerd[1509]: 2025-01-29 14:14:13.977 [INFO][2887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:14.003556 containerd[1509]: 2025-01-29 14:14:13.978 [INFO][2887] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"9f03ac48-26b4-459a-b077-970ffd89c283", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 14, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0", Pod:"nginx-deployment-7fcdb87857-p8xmr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali12611eb97eb", MAC:"a6:e6:29:ca:e9:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:14:14.003556 containerd[1509]: 2025-01-29 14:14:13.995 [INFO][2887] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0" Namespace="default" Pod="nginx-deployment-7fcdb87857-p8xmr" WorkloadEndpoint="10.230.31.206-k8s-nginx--deployment--7fcdb87857--p8xmr-eth0"
Jan 29 14:14:14.036720 containerd[1509]: time="2025-01-29T14:14:14.036470616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bkqlz,Uid:d7eeb6ab-5d1a-4a03-b3d9-08caf13db3b5,Namespace:calico-system,Attempt:8,} returns sandbox id \"48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d\""
Jan 29 14:14:14.040544 containerd[1509]: time="2025-01-29T14:14:14.040494388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 14:14:14.047750 containerd[1509]: time="2025-01-29T14:14:14.047590704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 14:14:14.047750 containerd[1509]: time="2025-01-29T14:14:14.047664158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 14:14:14.047750 containerd[1509]: time="2025-01-29T14:14:14.047699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:14:14.048238 containerd[1509]: time="2025-01-29T14:14:14.047811606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:14:14.074100 systemd[1]: Started cri-containerd-cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0.scope - libcontainer container cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0.
Jan 29 14:14:14.139959 containerd[1509]: time="2025-01-29T14:14:14.139810929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-p8xmr,Uid:9f03ac48-26b4-459a-b077-970ffd89c283,Namespace:default,Attempt:5,} returns sandbox id \"cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0\""
Jan 29 14:14:14.231095 kubelet[1884]: E0129 14:14:14.230965 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:14:14.579254 systemd[1]: run-containerd-runc-k8s.io-3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6-runc.kyvCVo.mount: Deactivated successfully.
Jan 29 14:14:14.949501 systemd-networkd[1422]: cali9c6e385027f: Gained IPv6LL
Jan 29 14:14:15.013565 systemd-networkd[1422]: cali12611eb97eb: Gained IPv6LL
Jan 29 14:14:15.232994 kubelet[1884]: E0129 14:14:15.232565 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:14:15.291488 kernel: bpftool[3198]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 29 14:14:15.657753 systemd-networkd[1422]: vxlan.calico: Link UP
Jan 29 14:14:15.657767 systemd-networkd[1422]: vxlan.calico: Gained carrier
Jan 29 14:14:15.766149 containerd[1509]: time="2025-01-29T14:14:15.764727254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:14:15.769150 containerd[1509]: time="2025-01-29T14:14:15.765747089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 29 14:14:15.774499 containerd[1509]: time="2025-01-29T14:14:15.772879645Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:14:15.787750 containerd[1509]: time="2025-01-29T14:14:15.787708131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:14:15.789572 containerd[1509]: time="2025-01-29T14:14:15.789487646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.748933233s"
Jan 29 14:14:15.789664 containerd[1509]: time="2025-01-29T14:14:15.789586291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 29 14:14:15.794132 containerd[1509]: time="2025-01-29T14:14:15.792636437Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 14:14:15.797191 containerd[1509]: time="2025-01-29T14:14:15.796739523Z" level=info msg="CreateContainer within sandbox \"48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 14:14:15.838431 containerd[1509]: time="2025-01-29T14:14:15.837997022Z" level=info msg="CreateContainer within sandbox \"48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"046668e4ee8b46e39c34d4b5da937b39aebbaea98660ec6a2a4469086efc4ab6\""
Jan 29 14:14:15.839946 containerd[1509]: time="2025-01-29T14:14:15.839564989Z" level=info msg="StartContainer for \"046668e4ee8b46e39c34d4b5da937b39aebbaea98660ec6a2a4469086efc4ab6\""
Jan 29 14:14:15.867146 update_engine[1490]: I20250129 14:14:15.864218 1490 update_attempter.cc:509] Updating boot flags...
Jan 29 14:14:15.943625 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3237) Jan 29 14:14:16.034347 systemd[1]: Started cri-containerd-046668e4ee8b46e39c34d4b5da937b39aebbaea98660ec6a2a4469086efc4ab6.scope - libcontainer container 046668e4ee8b46e39c34d4b5da937b39aebbaea98660ec6a2a4469086efc4ab6. Jan 29 14:14:16.136912 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3237) Jan 29 14:14:16.232137 containerd[1509]: time="2025-01-29T14:14:16.232035579Z" level=info msg="StartContainer for \"046668e4ee8b46e39c34d4b5da937b39aebbaea98660ec6a2a4469086efc4ab6\" returns successfully" Jan 29 14:14:16.232965 kubelet[1884]: E0129 14:14:16.232926 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:17.234395 kubelet[1884]: E0129 14:14:17.234244 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:17.510460 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Jan 29 14:14:18.235302 kubelet[1884]: E0129 14:14:18.235231 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:19.237893 kubelet[1884]: E0129 14:14:19.237524 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:19.567518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316797762.mount: Deactivated successfully. 
Jan 29 14:14:20.238062 kubelet[1884]: E0129 14:14:20.237998 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:21.239406 kubelet[1884]: E0129 14:14:21.239324 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:21.457511 containerd[1509]: time="2025-01-29T14:14:21.457386498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:21.459442 containerd[1509]: time="2025-01-29T14:14:21.459362412Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 14:14:21.460139 containerd[1509]: time="2025-01-29T14:14:21.459939939Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:21.464378 containerd[1509]: time="2025-01-29T14:14:21.464340590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:21.466215 containerd[1509]: time="2025-01-29T14:14:21.465959916Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.673254659s" Jan 29 14:14:21.466215 containerd[1509]: time="2025-01-29T14:14:21.466023139Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 14:14:21.468905 containerd[1509]: 
time="2025-01-29T14:14:21.468170126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 14:14:21.480143 containerd[1509]: time="2025-01-29T14:14:21.480070931Z" level=info msg="CreateContainer within sandbox \"cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 14:14:21.515230 containerd[1509]: time="2025-01-29T14:14:21.515165371Z" level=info msg="CreateContainer within sandbox \"cf6245de27b3431f3811b4475b1492ab530c66582355cb65d92947da3ec0f1f0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"83397286a8b8a4ec4d50a27b3bf96be6358456a9c8f101dc33f266f4cdccb37d\"" Jan 29 14:14:21.516526 containerd[1509]: time="2025-01-29T14:14:21.516251911Z" level=info msg="StartContainer for \"83397286a8b8a4ec4d50a27b3bf96be6358456a9c8f101dc33f266f4cdccb37d\"" Jan 29 14:14:21.567391 systemd[1]: Started cri-containerd-83397286a8b8a4ec4d50a27b3bf96be6358456a9c8f101dc33f266f4cdccb37d.scope - libcontainer container 83397286a8b8a4ec4d50a27b3bf96be6358456a9c8f101dc33f266f4cdccb37d. 
Jan 29 14:14:21.617284 containerd[1509]: time="2025-01-29T14:14:21.617211424Z" level=info msg="StartContainer for \"83397286a8b8a4ec4d50a27b3bf96be6358456a9c8f101dc33f266f4cdccb37d\" returns successfully" Jan 29 14:14:22.240387 kubelet[1884]: E0129 14:14:22.240301 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:22.645136 kubelet[1884]: I0129 14:14:22.644975 1884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-p8xmr" podStartSLOduration=7.320731939 podStartE2EDuration="14.64493762s" podCreationTimestamp="2025-01-29 14:14:08 +0000 UTC" firstStartedPulling="2025-01-29 14:14:14.143557925 +0000 UTC m=+24.578269692" lastFinishedPulling="2025-01-29 14:14:21.467763597 +0000 UTC m=+31.902475373" observedRunningTime="2025-01-29 14:14:22.644550083 +0000 UTC m=+33.079261861" watchObservedRunningTime="2025-01-29 14:14:22.64493762 +0000 UTC m=+33.079649391" Jan 29 14:14:22.978989 containerd[1509]: time="2025-01-29T14:14:22.977963162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:22.981305 containerd[1509]: time="2025-01-29T14:14:22.981231490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 14:14:22.982325 containerd[1509]: time="2025-01-29T14:14:22.982273073Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:22.985009 containerd[1509]: time="2025-01-29T14:14:22.984946133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 29 14:14:22.986522 containerd[1509]: time="2025-01-29T14:14:22.986268665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.518053687s" Jan 29 14:14:22.986522 containerd[1509]: time="2025-01-29T14:14:22.986318713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 14:14:22.989714 containerd[1509]: time="2025-01-29T14:14:22.989677313Z" level=info msg="CreateContainer within sandbox \"48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 14:14:23.007820 containerd[1509]: time="2025-01-29T14:14:23.007694090Z" level=info msg="CreateContainer within sandbox \"48c352ffcc7ce226ac5d74d1a7de7f25660b3a7ad9bac7de6b947dfc2ea8e26d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e8da1010ba8f7f374555176516b45fa94253dde1c0fb77f126d6d1eb2e0e7db5\"" Jan 29 14:14:23.008736 containerd[1509]: time="2025-01-29T14:14:23.008678019Z" level=info msg="StartContainer for \"e8da1010ba8f7f374555176516b45fa94253dde1c0fb77f126d6d1eb2e0e7db5\"" Jan 29 14:14:23.063335 systemd[1]: Started cri-containerd-e8da1010ba8f7f374555176516b45fa94253dde1c0fb77f126d6d1eb2e0e7db5.scope - libcontainer container e8da1010ba8f7f374555176516b45fa94253dde1c0fb77f126d6d1eb2e0e7db5. 
Jan 29 14:14:23.113706 containerd[1509]: time="2025-01-29T14:14:23.112908552Z" level=info msg="StartContainer for \"e8da1010ba8f7f374555176516b45fa94253dde1c0fb77f126d6d1eb2e0e7db5\" returns successfully" Jan 29 14:14:23.241038 kubelet[1884]: E0129 14:14:23.240815 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:23.391835 kubelet[1884]: I0129 14:14:23.391633 1884 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 14:14:23.391835 kubelet[1884]: I0129 14:14:23.391711 1884 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 14:14:23.663722 kubelet[1884]: I0129 14:14:23.663495 1884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bkqlz" podStartSLOduration=24.715527547 podStartE2EDuration="33.663463907s" podCreationTimestamp="2025-01-29 14:13:50 +0000 UTC" firstStartedPulling="2025-01-29 14:14:14.039725308 +0000 UTC m=+24.474437071" lastFinishedPulling="2025-01-29 14:14:22.987661669 +0000 UTC m=+33.422373431" observedRunningTime="2025-01-29 14:14:23.662703192 +0000 UTC m=+34.097414973" watchObservedRunningTime="2025-01-29 14:14:23.663463907 +0000 UTC m=+34.098175676" Jan 29 14:14:24.241812 kubelet[1884]: E0129 14:14:24.241722 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:25.243153 kubelet[1884]: E0129 14:14:25.243031 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:26.243548 kubelet[1884]: E0129 14:14:26.243469 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
14:14:27.244130 kubelet[1884]: E0129 14:14:27.243972 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:28.244439 kubelet[1884]: E0129 14:14:28.244345 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:29.244994 kubelet[1884]: E0129 14:14:29.244914 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:29.702018 systemd[1]: Created slice kubepods-besteffort-podd3ce0f7b_efdb_4604_a71d_b0bcf7b66d66.slice - libcontainer container kubepods-besteffort-podd3ce0f7b_efdb_4604_a71d_b0bcf7b66d66.slice. Jan 29 14:14:29.748262 kubelet[1884]: I0129 14:14:29.748033 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66-data\") pod \"nfs-server-provisioner-0\" (UID: \"d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66\") " pod="default/nfs-server-provisioner-0" Jan 29 14:14:29.748262 kubelet[1884]: I0129 14:14:29.748212 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwbf\" (UniqueName: \"kubernetes.io/projected/d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66-kube-api-access-5vwbf\") pod \"nfs-server-provisioner-0\" (UID: \"d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66\") " pod="default/nfs-server-provisioner-0" Jan 29 14:14:30.006836 containerd[1509]: time="2025-01-29T14:14:30.006738971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66,Namespace:default,Attempt:0,}" Jan 29 14:14:30.202604 systemd-networkd[1422]: cali60e51b789ff: Link UP Jan 29 14:14:30.202939 systemd-networkd[1422]: cali60e51b789ff: Gained carrier Jan 29 14:14:30.209049 kubelet[1884]: E0129 14:14:30.208970 1884 file.go:104] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.083 [INFO][3478] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.31.206-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66 1334 0 2025-01-29 14:14:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.31.206 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.083 [INFO][3478] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.128 [INFO][3488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" HandleID="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Workload="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.148 [INFO][3488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" HandleID="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Workload="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308ab0), Attrs:map[string]string{"namespace":"default", "node":"10.230.31.206", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 14:14:30.128791446 +0000 UTC"}, Hostname:"10.230.31.206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.148 [INFO][3488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.148 [INFO][3488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.148 [INFO][3488] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.31.206' Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.152 [INFO][3488] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.159 [INFO][3488] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.166 [INFO][3488] ipam/ipam.go 489: Trying affinity for 192.168.115.0/26 host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.169 [INFO][3488] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.0/26 host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.173 [INFO][3488] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.173 [INFO][3488] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.177 [INFO][3488] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.184 [INFO][3488] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.195 [INFO][3488] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.3/26] block=192.168.115.0/26 
handle="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.195 [INFO][3488] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.3/26] handle="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" host="10.230.31.206" Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.195 [INFO][3488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:14:30.222358 containerd[1509]: 2025-01-29 14:14:30.195 [INFO][3488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.3/26] IPv6=[] ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" HandleID="k8s-pod-network.1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Workload="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.223688 containerd[1509]: 2025-01-29 14:14:30.197 [INFO][3478] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66", ResourceVersion:"1334", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:14:30.223688 containerd[1509]: 2025-01-29 14:14:30.197 [INFO][3478] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.3/32] ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.223688 containerd[1509]: 2025-01-29 14:14:30.197 [INFO][3478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.223688 containerd[1509]: 2025-01-29 14:14:30.203 [INFO][3478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.224027 containerd[1509]: 2025-01-29 14:14:30.204 [INFO][3478] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66", ResourceVersion:"1334", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a2:3c:83:e8:a7:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:14:30.224027 containerd[1509]: 2025-01-29 14:14:30.220 [INFO][3478] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.31.206-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:14:30.245919 kubelet[1884]: E0129 14:14:30.245799 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:30.268763 containerd[1509]: time="2025-01-29T14:14:30.268411046Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:14:30.268763 containerd[1509]: time="2025-01-29T14:14:30.268595987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:14:30.269396 containerd[1509]: time="2025-01-29T14:14:30.268628009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:14:30.269396 containerd[1509]: time="2025-01-29T14:14:30.269001466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:14:30.306345 systemd[1]: Started cri-containerd-1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd.scope - libcontainer container 1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd. Jan 29 14:14:30.368573 containerd[1509]: time="2025-01-29T14:14:30.368382657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d3ce0f7b-efdb-4604-a71d-b0bcf7b66d66,Namespace:default,Attempt:0,} returns sandbox id \"1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd\"" Jan 29 14:14:30.372014 containerd[1509]: time="2025-01-29T14:14:30.371528195Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 14:14:30.870070 systemd[1]: run-containerd-runc-k8s.io-1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd-runc.oE6jfI.mount: Deactivated successfully. 
Jan 29 14:14:31.247689 kubelet[1884]: E0129 14:14:31.246882 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:31.590089 systemd-networkd[1422]: cali60e51b789ff: Gained IPv6LL Jan 29 14:14:32.248304 kubelet[1884]: E0129 14:14:32.248219 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:33.249381 kubelet[1884]: E0129 14:14:33.249316 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:33.654040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724726828.mount: Deactivated successfully. Jan 29 14:14:34.250786 kubelet[1884]: E0129 14:14:34.250683 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:35.252514 kubelet[1884]: E0129 14:14:35.252208 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:36.253456 kubelet[1884]: E0129 14:14:36.253407 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:36.792016 containerd[1509]: time="2025-01-29T14:14:36.791875711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:36.794173 containerd[1509]: time="2025-01-29T14:14:36.793818639Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 14:14:36.794998 containerd[1509]: time="2025-01-29T14:14:36.794545492Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
14:14:36.799142 containerd[1509]: time="2025-01-29T14:14:36.799011370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:36.800911 containerd[1509]: time="2025-01-29T14:14:36.800673716Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.429087789s" Jan 29 14:14:36.800911 containerd[1509]: time="2025-01-29T14:14:36.800742561Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 14:14:36.805258 containerd[1509]: time="2025-01-29T14:14:36.805091113Z" level=info msg="CreateContainer within sandbox \"1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 14:14:36.846872 containerd[1509]: time="2025-01-29T14:14:36.846703714Z" level=info msg="CreateContainer within sandbox \"1a2a16cfd67b9b8556adcdc8da877c5d3852ccc53e2bc98e02285ba6d0df94dd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"16b3c08afb76f20e0863157c1c74ff4292d981134158384d40eb6f524318fa2d\"" Jan 29 14:14:36.847572 containerd[1509]: time="2025-01-29T14:14:36.847530722Z" level=info msg="StartContainer for \"16b3c08afb76f20e0863157c1c74ff4292d981134158384d40eb6f524318fa2d\"" Jan 29 14:14:36.902391 systemd[1]: Started cri-containerd-16b3c08afb76f20e0863157c1c74ff4292d981134158384d40eb6f524318fa2d.scope - libcontainer container 
16b3c08afb76f20e0863157c1c74ff4292d981134158384d40eb6f524318fa2d. Jan 29 14:14:36.943619 containerd[1509]: time="2025-01-29T14:14:36.943540854Z" level=info msg="StartContainer for \"16b3c08afb76f20e0863157c1c74ff4292d981134158384d40eb6f524318fa2d\" returns successfully" Jan 29 14:14:37.254760 kubelet[1884]: E0129 14:14:37.254673 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:37.718585 kubelet[1884]: I0129 14:14:37.718344 1884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.286635442 podStartE2EDuration="8.718290716s" podCreationTimestamp="2025-01-29 14:14:29 +0000 UTC" firstStartedPulling="2025-01-29 14:14:30.370765557 +0000 UTC m=+40.805477319" lastFinishedPulling="2025-01-29 14:14:36.802420825 +0000 UTC m=+47.237132593" observedRunningTime="2025-01-29 14:14:37.717944296 +0000 UTC m=+48.152656084" watchObservedRunningTime="2025-01-29 14:14:37.718290716 +0000 UTC m=+48.153002490" Jan 29 14:14:38.255637 kubelet[1884]: E0129 14:14:38.255551 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:39.256821 kubelet[1884]: E0129 14:14:39.256720 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:40.257711 kubelet[1884]: E0129 14:14:40.257646 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:41.258750 kubelet[1884]: E0129 14:14:41.258663 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:42.258994 kubelet[1884]: E0129 14:14:42.258917 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:43.259500 
kubelet[1884]: E0129 14:14:43.259416 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:44.260456 kubelet[1884]: E0129 14:14:44.260383 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:44.591525 systemd[1]: run-containerd-runc-k8s.io-3fdef601e94bd226992f390ff5d8cdbb3a56d408492c497a29e02a09e5f2daf6-runc.6Bhk6H.mount: Deactivated successfully. Jan 29 14:14:45.261598 kubelet[1884]: E0129 14:14:45.261542 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:46.262227 kubelet[1884]: E0129 14:14:46.262095 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:46.398935 systemd[1]: Created slice kubepods-besteffort-podd49efef7_b0d0_4fe5_b149_97fef8d32860.slice - libcontainer container kubepods-besteffort-podd49efef7_b0d0_4fe5_b149_97fef8d32860.slice. 
Jan 29 14:14:46.463791 kubelet[1884]: I0129 14:14:46.463585 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a3af629f-d5e2-4be6-bec6-3dc246c0c141\" (UniqueName: \"kubernetes.io/nfs/d49efef7-b0d0-4fe5-b149-97fef8d32860-pvc-a3af629f-d5e2-4be6-bec6-3dc246c0c141\") pod \"test-pod-1\" (UID: \"d49efef7-b0d0-4fe5-b149-97fef8d32860\") " pod="default/test-pod-1" Jan 29 14:14:46.463791 kubelet[1884]: I0129 14:14:46.463662 1884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk568\" (UniqueName: \"kubernetes.io/projected/d49efef7-b0d0-4fe5-b149-97fef8d32860-kube-api-access-rk568\") pod \"test-pod-1\" (UID: \"d49efef7-b0d0-4fe5-b149-97fef8d32860\") " pod="default/test-pod-1" Jan 29 14:14:46.619483 kernel: FS-Cache: Loaded Jan 29 14:14:46.718411 kernel: RPC: Registered named UNIX socket transport module. Jan 29 14:14:46.718599 kernel: RPC: Registered udp transport module. Jan 29 14:14:46.718661 kernel: RPC: Registered tcp transport module. Jan 29 14:14:46.718719 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 14:14:46.719660 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 14:14:47.093959 kernel: NFS: Registering the id_resolver key type Jan 29 14:14:47.097966 kernel: Key type id_resolver registered Jan 29 14:14:47.098016 kernel: Key type id_legacy registered Jan 29 14:14:47.167242 nfsidmap[3696]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 29 14:14:47.178022 nfsidmap[3699]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 29 14:14:47.263638 kubelet[1884]: E0129 14:14:47.263425 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:47.305088 containerd[1509]: time="2025-01-29T14:14:47.304614586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d49efef7-b0d0-4fe5-b149-97fef8d32860,Namespace:default,Attempt:0,}" Jan 29 14:14:47.599894 systemd-networkd[1422]: cali5ec59c6bf6e: Link UP Jan 29 14:14:47.602826 systemd-networkd[1422]: cali5ec59c6bf6e: Gained carrier Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.393 [INFO][3703] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.31.206-k8s-test--pod--1-eth0 default d49efef7-b0d0-4fe5-b149-97fef8d32860 1397 0 2025-01-29 14:14:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.31.206 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.393 [INFO][3703] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.454 [INFO][3713] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" HandleID="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Workload="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.480 [INFO][3713] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" HandleID="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Workload="10.230.31.206-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038cd30), Attrs:map[string]string{"namespace":"default", "node":"10.230.31.206", "pod":"test-pod-1", "timestamp":"2025-01-29 14:14:47.454329317 +0000 UTC"}, Hostname:"10.230.31.206", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.480 [INFO][3713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.481 [INFO][3713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.481 [INFO][3713] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.31.206' Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.495 [INFO][3713] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.543 [INFO][3713] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.557 [INFO][3713] ipam/ipam.go 489: Trying affinity for 192.168.115.0/26 host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.560 [INFO][3713] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.0/26 host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.567 [INFO][3713] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.567 [INFO][3713] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.571 [INFO][3713] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.580 [INFO][3713] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.592 [INFO][3713] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.4/26] block=192.168.115.0/26 
handle="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.592 [INFO][3713] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.4/26] handle="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" host="10.230.31.206" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.592 [INFO][3713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.592 [INFO][3713] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.4/26] IPv6=[] ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" HandleID="k8s-pod-network.907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Workload="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.620444 containerd[1509]: 2025-01-29 14:14:47.594 [INFO][3703] cni-plugin/k8s.go 386: Populated endpoint ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d49efef7-b0d0-4fe5-b149-97fef8d32860", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"10.230.31.206", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:14:47.624331 containerd[1509]: 2025-01-29 14:14:47.594 [INFO][3703] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.4/32] ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.624331 containerd[1509]: 2025-01-29 14:14:47.594 [INFO][3703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.624331 containerd[1509]: 2025-01-29 14:14:47.599 [INFO][3703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.624331 containerd[1509]: 2025-01-29 14:14:47.602 [INFO][3703] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.31.206-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d49efef7-b0d0-4fe5-b149-97fef8d32860", ResourceVersion:"1397", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 29, 14, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.31.206", ContainerID:"907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"1e:77:70:b2:86:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:14:47.624331 containerd[1509]: 2025-01-29 14:14:47.617 [INFO][3703] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.31.206-k8s-test--pod--1-eth0" Jan 29 14:14:47.661850 containerd[1509]: time="2025-01-29T14:14:47.661632779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:14:47.661850 containerd[1509]: time="2025-01-29T14:14:47.661770321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:14:47.661850 containerd[1509]: time="2025-01-29T14:14:47.661794961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:14:47.662493 containerd[1509]: time="2025-01-29T14:14:47.661954785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:14:47.701333 systemd[1]: Started cri-containerd-907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a.scope - libcontainer container 907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a. Jan 29 14:14:47.765647 containerd[1509]: time="2025-01-29T14:14:47.765529253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d49efef7-b0d0-4fe5-b149-97fef8d32860,Namespace:default,Attempt:0,} returns sandbox id \"907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a\"" Jan 29 14:14:47.768079 containerd[1509]: time="2025-01-29T14:14:47.767993705Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 14:14:48.108374 containerd[1509]: time="2025-01-29T14:14:48.108313761Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:14:48.109256 containerd[1509]: time="2025-01-29T14:14:48.109178250Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 14:14:48.119024 containerd[1509]: time="2025-01-29T14:14:48.118807839Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 350.741741ms" Jan 29 14:14:48.119024 containerd[1509]: time="2025-01-29T14:14:48.118891869Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 
14:14:48.122257 containerd[1509]: time="2025-01-29T14:14:48.122222037Z" level=info msg="CreateContainer within sandbox \"907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 14:14:48.139958 containerd[1509]: time="2025-01-29T14:14:48.139829795Z" level=info msg="CreateContainer within sandbox \"907a46376dc2101606a37ee2dc5fd5fe49284eed10705bd72ceb719fc5ec1b8a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b50b0e1d53985c35444f07d028ee781efca5251c2842a580fc1a117930b18184\"" Jan 29 14:14:48.141309 containerd[1509]: time="2025-01-29T14:14:48.140791635Z" level=info msg="StartContainer for \"b50b0e1d53985c35444f07d028ee781efca5251c2842a580fc1a117930b18184\"" Jan 29 14:14:48.181315 systemd[1]: Started cri-containerd-b50b0e1d53985c35444f07d028ee781efca5251c2842a580fc1a117930b18184.scope - libcontainer container b50b0e1d53985c35444f07d028ee781efca5251c2842a580fc1a117930b18184. Jan 29 14:14:48.215393 containerd[1509]: time="2025-01-29T14:14:48.214844038Z" level=info msg="StartContainer for \"b50b0e1d53985c35444f07d028ee781efca5251c2842a580fc1a117930b18184\" returns successfully" Jan 29 14:14:48.264154 kubelet[1884]: E0129 14:14:48.264090 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:48.741464 systemd-networkd[1422]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 14:14:49.265340 kubelet[1884]: E0129 14:14:49.265262 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:50.209175 kubelet[1884]: E0129 14:14:50.209088 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:50.255470 containerd[1509]: time="2025-01-29T14:14:50.255354045Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 
14:14:50.256021 containerd[1509]: time="2025-01-29T14:14:50.255530318Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:50.256021 containerd[1509]: time="2025-01-29T14:14:50.255549979Z" level=info msg="StopPodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:50.261858 containerd[1509]: time="2025-01-29T14:14:50.261828464Z" level=info msg="RemovePodSandbox for \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:50.266350 kubelet[1884]: E0129 14:14:50.266291 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:50.274214 containerd[1509]: time="2025-01-29T14:14:50.274063239Z" level=info msg="Forcibly stopping sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\"" Jan 29 14:14:50.274323 containerd[1509]: time="2025-01-29T14:14:50.274266022Z" level=info msg="TearDown network for sandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" successfully" Jan 29 14:14:50.287033 containerd[1509]: time="2025-01-29T14:14:50.286948642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.287157 containerd[1509]: time="2025-01-29T14:14:50.287047459Z" level=info msg="RemovePodSandbox \"7b44a68c6d4513bceebe82e13a0b01a12baf1ed1989e5384f977f63e80482e0e\" returns successfully" Jan 29 14:14:50.287869 containerd[1509]: time="2025-01-29T14:14:50.287770001Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:50.288011 containerd[1509]: time="2025-01-29T14:14:50.287981793Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully" Jan 29 14:14:50.288265 containerd[1509]: time="2025-01-29T14:14:50.288008529Z" level=info msg="StopPodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully" Jan 29 14:14:50.288377 containerd[1509]: time="2025-01-29T14:14:50.288335713Z" level=info msg="RemovePodSandbox for \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:50.288377 containerd[1509]: time="2025-01-29T14:14:50.288371990Z" level=info msg="Forcibly stopping sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\"" Jan 29 14:14:50.306125 containerd[1509]: time="2025-01-29T14:14:50.288456568Z" level=info msg="TearDown network for sandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" successfully" Jan 29 14:14:50.309241 containerd[1509]: time="2025-01-29T14:14:50.309188269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.309421 containerd[1509]: time="2025-01-29T14:14:50.309247512Z" level=info msg="RemovePodSandbox \"c259ca8a71ba11a49c56fd1c112c81f8f1945c1f63f949c78e0eb411b23d5577\" returns successfully" Jan 29 14:14:50.310296 containerd[1509]: time="2025-01-29T14:14:50.309831399Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" Jan 29 14:14:50.310296 containerd[1509]: time="2025-01-29T14:14:50.309985048Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully" Jan 29 14:14:50.310296 containerd[1509]: time="2025-01-29T14:14:50.310007865Z" level=info msg="StopPodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully" Jan 29 14:14:50.310814 containerd[1509]: time="2025-01-29T14:14:50.310778456Z" level=info msg="RemovePodSandbox for \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" Jan 29 14:14:50.310891 containerd[1509]: time="2025-01-29T14:14:50.310817685Z" level=info msg="Forcibly stopping sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\"" Jan 29 14:14:50.311104 containerd[1509]: time="2025-01-29T14:14:50.310918045Z" level=info msg="TearDown network for sandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" successfully" Jan 29 14:14:50.313360 containerd[1509]: time="2025-01-29T14:14:50.313317357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.313437 containerd[1509]: time="2025-01-29T14:14:50.313374275Z" level=info msg="RemovePodSandbox \"d20be9ff8f6c39b1d90c5787ea8648a11208dd63d534f14ff1d52df21a7369c8\" returns successfully" Jan 29 14:14:50.313991 containerd[1509]: time="2025-01-29T14:14:50.313750991Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\"" Jan 29 14:14:50.313991 containerd[1509]: time="2025-01-29T14:14:50.313872950Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully" Jan 29 14:14:50.313991 containerd[1509]: time="2025-01-29T14:14:50.313893005Z" level=info msg="StopPodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully" Jan 29 14:14:50.314315 containerd[1509]: time="2025-01-29T14:14:50.314268783Z" level=info msg="RemovePodSandbox for \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\"" Jan 29 14:14:50.314315 containerd[1509]: time="2025-01-29T14:14:50.314305053Z" level=info msg="Forcibly stopping sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\"" Jan 29 14:14:50.314456 containerd[1509]: time="2025-01-29T14:14:50.314405009Z" level=info msg="TearDown network for sandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" successfully" Jan 29 14:14:50.316876 containerd[1509]: time="2025-01-29T14:14:50.316826649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.317191 containerd[1509]: time="2025-01-29T14:14:50.316880741Z" level=info msg="RemovePodSandbox \"83bd4c54b1d170d4b0ade5c8c9c72351d9e9b974a6197b8577fc8bf28093735d\" returns successfully" Jan 29 14:14:50.317434 containerd[1509]: time="2025-01-29T14:14:50.317276601Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\"" Jan 29 14:14:50.317521 containerd[1509]: time="2025-01-29T14:14:50.317473036Z" level=info msg="TearDown network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" successfully" Jan 29 14:14:50.317597 containerd[1509]: time="2025-01-29T14:14:50.317518372Z" level=info msg="StopPodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" returns successfully" Jan 29 14:14:50.318162 containerd[1509]: time="2025-01-29T14:14:50.318130392Z" level=info msg="RemovePodSandbox for \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\"" Jan 29 14:14:50.318259 containerd[1509]: time="2025-01-29T14:14:50.318167164Z" level=info msg="Forcibly stopping sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\"" Jan 29 14:14:50.318340 containerd[1509]: time="2025-01-29T14:14:50.318270445Z" level=info msg="TearDown network for sandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" successfully" Jan 29 14:14:50.320684 containerd[1509]: time="2025-01-29T14:14:50.320642273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.320844 containerd[1509]: time="2025-01-29T14:14:50.320693880Z" level=info msg="RemovePodSandbox \"b7c085b0c413b5c87c05583e2ece6eda68ffabc28f61ff972b3616d52deae2c7\" returns successfully" Jan 29 14:14:50.321707 containerd[1509]: time="2025-01-29T14:14:50.321446751Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\"" Jan 29 14:14:50.321707 containerd[1509]: time="2025-01-29T14:14:50.321626254Z" level=info msg="TearDown network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" successfully" Jan 29 14:14:50.321707 containerd[1509]: time="2025-01-29T14:14:50.321646358Z" level=info msg="StopPodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" returns successfully" Jan 29 14:14:50.322594 containerd[1509]: time="2025-01-29T14:14:50.322020740Z" level=info msg="RemovePodSandbox for \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\"" Jan 29 14:14:50.322594 containerd[1509]: time="2025-01-29T14:14:50.322050166Z" level=info msg="Forcibly stopping sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\"" Jan 29 14:14:50.322594 containerd[1509]: time="2025-01-29T14:14:50.322188522Z" level=info msg="TearDown network for sandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" successfully" Jan 29 14:14:50.341614 containerd[1509]: time="2025-01-29T14:14:50.341549594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.341614 containerd[1509]: time="2025-01-29T14:14:50.341611254Z" level=info msg="RemovePodSandbox \"88fdc4aefe6f2dfe63e23ee5fdf36ccf79aa5653def7479b3986dee24345b6ac\" returns successfully" Jan 29 14:14:50.342293 containerd[1509]: time="2025-01-29T14:14:50.342162809Z" level=info msg="StopPodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\"" Jan 29 14:14:50.342293 containerd[1509]: time="2025-01-29T14:14:50.342273333Z" level=info msg="TearDown network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" successfully" Jan 29 14:14:50.342293 containerd[1509]: time="2025-01-29T14:14:50.342292307Z" level=info msg="StopPodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" returns successfully" Jan 29 14:14:50.342769 containerd[1509]: time="2025-01-29T14:14:50.342739764Z" level=info msg="RemovePodSandbox for \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\"" Jan 29 14:14:50.342848 containerd[1509]: time="2025-01-29T14:14:50.342776526Z" level=info msg="Forcibly stopping sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\"" Jan 29 14:14:50.342921 containerd[1509]: time="2025-01-29T14:14:50.342871750Z" level=info msg="TearDown network for sandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" successfully" Jan 29 14:14:50.345483 containerd[1509]: time="2025-01-29T14:14:50.345432641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.345596 containerd[1509]: time="2025-01-29T14:14:50.345499690Z" level=info msg="RemovePodSandbox \"407221a1f1c500d27a764909eb3a6e5481e2355759aba4ae013c9b96519bfd18\" returns successfully" Jan 29 14:14:50.346011 containerd[1509]: time="2025-01-29T14:14:50.345950744Z" level=info msg="StopPodSandbox for \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\"" Jan 29 14:14:50.346152 containerd[1509]: time="2025-01-29T14:14:50.346101648Z" level=info msg="TearDown network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\" successfully" Jan 29 14:14:50.346225 containerd[1509]: time="2025-01-29T14:14:50.346152015Z" level=info msg="StopPodSandbox for \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\" returns successfully" Jan 29 14:14:50.346606 containerd[1509]: time="2025-01-29T14:14:50.346568584Z" level=info msg="RemovePodSandbox for \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\"" Jan 29 14:14:50.346675 containerd[1509]: time="2025-01-29T14:14:50.346609978Z" level=info msg="Forcibly stopping sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\"" Jan 29 14:14:50.346755 containerd[1509]: time="2025-01-29T14:14:50.346706619Z" level=info msg="TearDown network for sandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\" successfully" Jan 29 14:14:50.349164 containerd[1509]: time="2025-01-29T14:14:50.349123232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.349555 containerd[1509]: time="2025-01-29T14:14:50.349171826Z" level=info msg="RemovePodSandbox \"f84f32eeb36054eabf32691580b337a1b5ff98267f4b26ad242bbfc75b7963aa\" returns successfully" Jan 29 14:14:50.349773 containerd[1509]: time="2025-01-29T14:14:50.349629440Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" Jan 29 14:14:50.349773 containerd[1509]: time="2025-01-29T14:14:50.349755215Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully" Jan 29 14:14:50.350222 containerd[1509]: time="2025-01-29T14:14:50.349775394Z" level=info msg="StopPodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully" Jan 29 14:14:50.350609 containerd[1509]: time="2025-01-29T14:14:50.350522260Z" level=info msg="RemovePodSandbox for \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" Jan 29 14:14:50.350609 containerd[1509]: time="2025-01-29T14:14:50.350563710Z" level=info msg="Forcibly stopping sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\"" Jan 29 14:14:50.350839 containerd[1509]: time="2025-01-29T14:14:50.350650009Z" level=info msg="TearDown network for sandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" successfully" Jan 29 14:14:50.353229 containerd[1509]: time="2025-01-29T14:14:50.353168399Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.390012 containerd[1509]: time="2025-01-29T14:14:50.389509899Z" level=info msg="RemovePodSandbox \"3edb3341714ab9b96d12296b10a401d4876b484a8b43e9151714917f1e544bd4\" returns successfully" Jan 29 14:14:50.390771 containerd[1509]: time="2025-01-29T14:14:50.390529287Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\"" Jan 29 14:14:50.390771 containerd[1509]: time="2025-01-29T14:14:50.390660643Z" level=info msg="TearDown network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" successfully" Jan 29 14:14:50.390771 containerd[1509]: time="2025-01-29T14:14:50.390690723Z" level=info msg="StopPodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" returns successfully" Jan 29 14:14:50.391707 containerd[1509]: time="2025-01-29T14:14:50.391258655Z" level=info msg="RemovePodSandbox for \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\"" Jan 29 14:14:50.391707 containerd[1509]: time="2025-01-29T14:14:50.391292059Z" level=info msg="Forcibly stopping sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\"" Jan 29 14:14:50.391707 containerd[1509]: time="2025-01-29T14:14:50.391410161Z" level=info msg="TearDown network for sandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" successfully" Jan 29 14:14:50.394269 containerd[1509]: time="2025-01-29T14:14:50.394209024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.394363 containerd[1509]: time="2025-01-29T14:14:50.394270670Z" level=info msg="RemovePodSandbox \"f70a7538889833d64ac87d078703876fec7b41abf382c08b473269c78b850921\" returns successfully" Jan 29 14:14:50.395067 containerd[1509]: time="2025-01-29T14:14:50.394791346Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\"" Jan 29 14:14:50.395067 containerd[1509]: time="2025-01-29T14:14:50.394913348Z" level=info msg="TearDown network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" successfully" Jan 29 14:14:50.395067 containerd[1509]: time="2025-01-29T14:14:50.394933633Z" level=info msg="StopPodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" returns successfully" Jan 29 14:14:50.395535 containerd[1509]: time="2025-01-29T14:14:50.395504505Z" level=info msg="RemovePodSandbox for \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\"" Jan 29 14:14:50.396296 containerd[1509]: time="2025-01-29T14:14:50.395658209Z" level=info msg="Forcibly stopping sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\"" Jan 29 14:14:50.396296 containerd[1509]: time="2025-01-29T14:14:50.395763258Z" level=info msg="TearDown network for sandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" successfully" Jan 29 14:14:50.398395 containerd[1509]: time="2025-01-29T14:14:50.398337494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.398500 containerd[1509]: time="2025-01-29T14:14:50.398410250Z" level=info msg="RemovePodSandbox \"4dc8772f2d31cc037fab4af2a8550bf1e17ca9d6e836f5f407d02ec77a9ec77f\" returns successfully" Jan 29 14:14:50.399138 containerd[1509]: time="2025-01-29T14:14:50.398817898Z" level=info msg="StopPodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\"" Jan 29 14:14:50.399138 containerd[1509]: time="2025-01-29T14:14:50.398947341Z" level=info msg="TearDown network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" successfully" Jan 29 14:14:50.399138 containerd[1509]: time="2025-01-29T14:14:50.398980908Z" level=info msg="StopPodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" returns successfully" Jan 29 14:14:50.399451 containerd[1509]: time="2025-01-29T14:14:50.399423829Z" level=info msg="RemovePodSandbox for \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\"" Jan 29 14:14:50.400145 containerd[1509]: time="2025-01-29T14:14:50.399562509Z" level=info msg="Forcibly stopping sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\"" Jan 29 14:14:50.400145 containerd[1509]: time="2025-01-29T14:14:50.399655908Z" level=info msg="TearDown network for sandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" successfully" Jan 29 14:14:50.402221 containerd[1509]: time="2025-01-29T14:14:50.402175310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.402869 containerd[1509]: time="2025-01-29T14:14:50.402228058Z" level=info msg="RemovePodSandbox \"2d60c69065707dc941dd2638d266a88e83ca08c5fcf92d93b3fe424dcd534434\" returns successfully" Jan 29 14:14:50.402869 containerd[1509]: time="2025-01-29T14:14:50.402563729Z" level=info msg="StopPodSandbox for \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\"" Jan 29 14:14:50.402869 containerd[1509]: time="2025-01-29T14:14:50.402683418Z" level=info msg="TearDown network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\" successfully" Jan 29 14:14:50.402869 containerd[1509]: time="2025-01-29T14:14:50.402714736Z" level=info msg="StopPodSandbox for \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\" returns successfully" Jan 29 14:14:50.403240 containerd[1509]: time="2025-01-29T14:14:50.403204694Z" level=info msg="RemovePodSandbox for \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\"" Jan 29 14:14:50.406140 containerd[1509]: time="2025-01-29T14:14:50.405861305Z" level=info msg="Forcibly stopping sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\"" Jan 29 14:14:50.406140 containerd[1509]: time="2025-01-29T14:14:50.406008579Z" level=info msg="TearDown network for sandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\" successfully" Jan 29 14:14:50.414212 containerd[1509]: time="2025-01-29T14:14:50.414175570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:14:50.414316 containerd[1509]: time="2025-01-29T14:14:50.414241150Z" level=info msg="RemovePodSandbox \"e459b3c9842a6a6b1044e1d9eaf3176a26c0ad6fea5e4901f475d35d8e54ea71\" returns successfully" Jan 29 14:14:51.267326 kubelet[1884]: E0129 14:14:51.267255 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:52.268192 kubelet[1884]: E0129 14:14:52.268066 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:53.269159 kubelet[1884]: E0129 14:14:53.269063 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:54.269524 kubelet[1884]: E0129 14:14:54.269440 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:55.270350 kubelet[1884]: E0129 14:14:55.270284 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:56.271005 kubelet[1884]: E0129 14:14:56.270926 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:57.271995 kubelet[1884]: E0129 14:14:57.271878 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:58.272635 kubelet[1884]: E0129 14:14:58.272357 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:14:59.273649 kubelet[1884]: E0129 14:14:59.273561 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"