Jan 29 16:37:31.047583 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:37:31.047634 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:37:31.047657 kernel: BIOS-provided physical RAM map:
Jan 29 16:37:31.047673 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:37:31.047683 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:37:31.047693 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:37:31.047705 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 16:37:31.047720 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 16:37:31.047731 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:37:31.047741 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:37:31.047752 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:37:31.047762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:37:31.047782 kernel: NX (Execute Disable) protection: active
Jan 29 16:37:31.047792 kernel: APIC: Static calls initialized
Jan 29 16:37:31.047805 kernel: SMBIOS 2.8 present.
Jan 29 16:37:31.047817 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 29 16:37:31.047828 kernel: Hypervisor detected: KVM
Jan 29 16:37:31.047851 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:37:31.047872 kernel: kvm-clock: using sched offset of 4513668820 cycles
Jan 29 16:37:31.047886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:37:31.047898 kernel: tsc: Detected 2500.032 MHz processor
Jan 29 16:37:31.047910 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:37:31.047922 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:37:31.047933 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 16:37:31.047945 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:37:31.047957 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:37:31.047974 kernel: Using GB pages for direct mapping
Jan 29 16:37:31.047986 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:37:31.047997 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 16:37:31.048009 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.048021 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.048032 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.048044 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 16:37:31.048055 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.048067 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.050407 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.050435 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:37:31.050447 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 16:37:31.050458 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 16:37:31.050471 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 16:37:31.050489 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 16:37:31.050501 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 16:37:31.050517 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 16:37:31.050529 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 16:37:31.050541 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 16:37:31.050552 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 16:37:31.050564 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 16:37:31.050575 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 16:37:31.050587 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 16:37:31.050598 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 16:37:31.050615 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 16:37:31.050626 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 16:37:31.050638 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 16:37:31.050649 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 16:37:31.050672 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 16:37:31.050683 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 16:37:31.050694 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 16:37:31.050705 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 16:37:31.050716 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 16:37:31.050727 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 16:37:31.050743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 16:37:31.050754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 16:37:31.050766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 16:37:31.050777 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 16:37:31.050788 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 16:37:31.050800 kernel: Zone ranges:
Jan 29 16:37:31.050812 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:37:31.050823 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 16:37:31.050834 kernel: Normal empty
Jan 29 16:37:31.050884 kernel: Movable zone start for each node
Jan 29 16:37:31.050900 kernel: Early memory node ranges
Jan 29 16:37:31.050912 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:37:31.050924 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 16:37:31.050936 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 16:37:31.050948 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:37:31.050960 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:37:31.050972 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 16:37:31.050984 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:37:31.051002 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:37:31.051015 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:37:31.051027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:37:31.051038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:37:31.051050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:37:31.051062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:37:31.051074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:37:31.051086 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:37:31.051098 kernel: TSC deadline timer available
Jan 29 16:37:31.051115 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 16:37:31.051127 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:37:31.051139 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:37:31.051151 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:37:31.051163 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:37:31.051175 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 16:37:31.051187 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 16:37:31.051199 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 16:37:31.051212 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 16:37:31.051229 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:37:31.051241 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:37:31.051255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:37:31.051268 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:37:31.051280 kernel: random: crng init done
Jan 29 16:37:31.051292 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:37:31.051304 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:37:31.051316 kernel: Fallback order for Node 0: 0
Jan 29 16:37:31.051360 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 16:37:31.051377 kernel: Policy zone: DMA32
Jan 29 16:37:31.051389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:37:31.051401 kernel: software IO TLB: area num 16.
Jan 29 16:37:31.051414 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 196876K reserved, 0K cma-reserved)
Jan 29 16:37:31.051438 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 16:37:31.051450 kernel: Kernel/User page tables isolation: enabled
Jan 29 16:37:31.051461 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:37:31.051472 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:37:31.051489 kernel: Dynamic Preempt: voluntary
Jan 29 16:37:31.051501 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:37:31.051525 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:37:31.051537 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 16:37:31.051551 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:37:31.051587 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:37:31.051605 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:37:31.051618 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:37:31.051630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 16:37:31.051643 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 16:37:31.051656 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:37:31.051668 kernel: Console: colour VGA+ 80x25
Jan 29 16:37:31.051686 kernel: printk: console [tty0] enabled
Jan 29 16:37:31.051699 kernel: printk: console [ttyS0] enabled
Jan 29 16:37:31.051712 kernel: ACPI: Core revision 20230628
Jan 29 16:37:31.051724 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:37:31.051737 kernel: x2apic enabled
Jan 29 16:37:31.051755 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:37:31.051768 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Jan 29 16:37:31.051781 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Jan 29 16:37:31.051793 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:37:31.051806 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 16:37:31.051819 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 16:37:31.051832 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:37:31.051844 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:37:31.051856 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:37:31.051879 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:37:31.051898 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 16:37:31.051911 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:37:31.051923 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:37:31.051936 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 16:37:31.051948 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 29 16:37:31.051961 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 16:37:31.051973 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:37:31.051986 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:37:31.051998 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:37:31.052010 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:37:31.052028 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 16:37:31.052041 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:37:31.052054 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:37:31.052066 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:37:31.052078 kernel: landlock: Up and running.
Jan 29 16:37:31.052091 kernel: SELinux: Initializing.
Jan 29 16:37:31.052103 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:37:31.052116 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:37:31.052129 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 29 16:37:31.052141 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 16:37:31.052154 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 16:37:31.052172 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 16:37:31.052185 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 29 16:37:31.052198 kernel: signal: max sigframe size: 1776
Jan 29 16:37:31.052210 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:37:31.052223 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:37:31.052236 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 16:37:31.052249 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:37:31.052261 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:37:31.052274 kernel: .... node #0, CPUs: #1
Jan 29 16:37:31.052292 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 16:37:31.052305 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:37:31.052317 kernel: smpboot: Max logical packages: 16
Jan 29 16:37:31.052330 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Jan 29 16:37:31.053453 kernel: devtmpfs: initialized
Jan 29 16:37:31.053466 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:37:31.053478 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:37:31.053504 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 16:37:31.053516 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:37:31.053535 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:37:31.053548 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:37:31.053560 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:37:31.053576 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:37:31.053588 kernel: audit: type=2000 audit(1738168649.496:1): state=initialized audit_enabled=0 res=1
Jan 29 16:37:31.053601 kernel: cpuidle: using governor menu
Jan 29 16:37:31.053613 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:37:31.053625 kernel: dca service started, version 1.12.1
Jan 29 16:37:31.053638 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:37:31.053655 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 16:37:31.053680 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:37:31.053693 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:37:31.053722 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:37:31.053740 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:37:31.053753 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:37:31.053765 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:37:31.053778 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:37:31.053791 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:37:31.053814 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:37:31.053840 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:37:31.053852 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:37:31.053875 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:37:31.053901 kernel: ACPI: Interpreter enabled
Jan 29 16:37:31.053914 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:37:31.053927 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:37:31.053939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:37:31.053952 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:37:31.053971 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:37:31.053984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:37:31.054254 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:37:31.055548 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 16:37:31.055727 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 16:37:31.055748 kernel: PCI host bridge to bus 0000:00
Jan 29 16:37:31.055961 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:37:31.056136 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:37:31.056295 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:37:31.058515 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 16:37:31.058680 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:37:31.058854 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 16:37:31.059031 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:37:31.059245 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:37:31.059463 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 16:37:31.059648 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 16:37:31.059831 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 16:37:31.060019 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 16:37:31.060191 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:37:31.060420 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.060609 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 16:37:31.060808 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.060999 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 16:37:31.061198 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.063454 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 16:37:31.063658 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.063844 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 16:37:31.064057 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.064236 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 16:37:31.065467 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.065655 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 16:37:31.065846 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.066051 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 16:37:31.066245 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:37:31.067476 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 16:37:31.067679 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:37:31.067856 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 16:37:31.068046 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 16:37:31.068219 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 16:37:31.069435 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 16:37:31.069633 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:37:31.069805 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:37:31.069990 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 16:37:31.070161 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 16:37:31.070434 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:37:31.070620 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:37:31.070812 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:37:31.070999 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 29 16:37:31.071169 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 16:37:31.071364 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:37:31.072564 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:37:31.072765 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 16:37:31.072971 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 16:37:31.073151 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 16:37:31.073327 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 16:37:31.074544 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:37:31.074754 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 16:37:31.074984 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 16:37:31.075190 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 16:37:31.077424 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 16:37:31.077614 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:37:31.077819 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:37:31.078022 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 16:37:31.078196 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 16:37:31.078428 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:37:31.078600 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:37:31.078799 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:37:31.078990 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 16:37:31.079164 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 16:37:31.081365 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:37:31.081560 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:37:31.081740 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 16:37:31.081923 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:37:31.082103 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:37:31.082296 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 16:37:31.082519 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:37:31.082690 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:37:31.082871 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 16:37:31.083046 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:37:31.083215 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:37:31.085437 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 16:37:31.085621 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:37:31.085796 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:37:31.086000 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 16:37:31.086178 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:37:31.086406 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:37:31.086427 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:37:31.086441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:37:31.086453 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:37:31.086474 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:37:31.086495 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:37:31.086507 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:37:31.086529 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:37:31.086541 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:37:31.086554 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:37:31.086578 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:37:31.086591 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:37:31.086604 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:37:31.086617 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:37:31.086635 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:37:31.086648 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:37:31.086661 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:37:31.086674 kernel: iommu: Default domain type: Translated
Jan 29 16:37:31.086687 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:37:31.086700 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:37:31.086713 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:37:31.086725 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:37:31.086738 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 16:37:31.086929 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:37:31.087099 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:37:31.087306 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:37:31.087325 kernel: vgaarb: loaded
Jan 29 16:37:31.087338 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:37:31.087350 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:37:31.087363 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:37:31.087390 kernel: pnp: PnP ACPI init
Jan 29 16:37:31.087616 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:37:31.087639 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 16:37:31.087652 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:37:31.087665 kernel: NET: Registered PF_INET protocol family
Jan 29 16:37:31.087678 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:37:31.087691 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:37:31.087704 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:37:31.087717 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:37:31.087743 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:37:31.087756 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:37:31.087769 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:37:31.087782 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:37:31.087795 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:37:31.087811 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:37:31.087998 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 29 16:37:31.088171 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 16:37:31.090382 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 16:37:31.090583 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 16:37:31.090756 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 16:37:31.090943 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:37:31.091117 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:37:31.091329 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:37:31.093540 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:37:31.094446 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:37:31.094635 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:37:31.094813 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 16:37:31.095010 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 16:37:31.095188 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 16:37:31.097400 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 16:37:31.097575 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 16:37:31.097791 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 16:37:31.098011 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:37:31.098182 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 16:37:31.098453 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 16:37:31.098629 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 16:37:31.098800 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:37:31.098984 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 16:37:31.099155 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 16:37:31.099357 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:37:31.099552 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:37:31.099733 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 16:37:31.099919 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 16:37:31.100090 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:37:31.100272 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:37:31.100493 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 16:37:31.100665 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 16:37:31.100841 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:37:31.101038 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:37:31.101209 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 16:37:31.101418 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 16:37:31.101608 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:37:31.101778 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:37:31.101960 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 16:37:31.102139 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 16:37:31.102309 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:37:31.102510 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:37:31.102697 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 16:37:31.102887 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 16:37:31.103068 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:37:31.103239 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:37:31.103450 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 16:37:31.103632 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 16:37:31.103802 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:37:31.103992 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:37:31.104154 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:37:31.104310 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:37:31.104497 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:37:31.104672 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 16:37:31.104833 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:37:31.105020 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 16:37:31.105200 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 16:37:31.105428 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 16:37:31.105604 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:37:31.105776 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 16:37:31.105980 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 29 16:37:31.106142 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 16:37:31.106302 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:37:31.106527 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 29 16:37:31.106703 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 16:37:31.106874 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:37:31.107074 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 29 16:37:31.107238 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 16:37:31.107450 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:37:31.107640 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 29 16:37:31.107817 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 16:37:31.107993 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:37:31.108166 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 29 16:37:31.108358 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 16:37:31.108525 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:37:31.108707 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 29 16:37:31.108904 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 16:37:31.109066 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:37:31.109246 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 29 16:37:31.109458 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 16:37:31.109641 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:37:31.109669 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:37:31.109683 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:37:31.109703 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
29 16:37:31.109716 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 16:37:31.109730 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 16:37:31.109743 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 29 16:37:31.109757 kernel: Initialise system trusted keyrings Jan 29 16:37:31.109775 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 16:37:31.109788 kernel: Key type asymmetric registered Jan 29 16:37:31.109801 kernel: Asymmetric key parser 'x509' registered Jan 29 16:37:31.109814 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:37:31.109827 kernel: io scheduler mq-deadline registered Jan 29 16:37:31.109852 kernel: io scheduler kyber registered Jan 29 16:37:31.109883 kernel: io scheduler bfq registered Jan 29 16:37:31.110054 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 16:37:31.110225 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 16:37:31.110433 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.110623 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 16:37:31.110795 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 16:37:31.110981 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.111154 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 29 16:37:31.111325 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 29 16:37:31.111556 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.111738 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 29 
16:37:31.111920 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 29 16:37:31.112091 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.112260 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 29 16:37:31.112472 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 29 16:37:31.112653 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.112848 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 29 16:37:31.113036 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 29 16:37:31.113209 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.113423 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 29 16:37:31.113619 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 29 16:37:31.113814 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.114007 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 29 16:37:31.114178 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 29 16:37:31.114373 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:37:31.114396 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:37:31.114419 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 16:37:31.114440 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 16:37:31.114454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:37:31.114468 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:37:31.114482 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:37:31.114495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:37:31.114508 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:37:31.114691 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 16:37:31.114712 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:37:31.114897 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 16:37:31.115068 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:37:30 UTC (1738168650) Jan 29 16:37:31.115251 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 16:37:31.115271 kernel: intel_pstate: CPU model not supported Jan 29 16:37:31.115285 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:37:31.115299 kernel: Segment Routing with IPv6 Jan 29 16:37:31.115312 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:37:31.115325 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:37:31.115338 kernel: Key type dns_resolver registered Jan 29 16:37:31.115397 kernel: IPI shorthand broadcast: enabled Jan 29 16:37:31.115416 kernel: sched_clock: Marking stable (1141026805, 240125549)->(1626083988, -244931634) Jan 29 16:37:31.115429 kernel: registered taskstats version 1 Jan 29 16:37:31.115443 kernel: Loading compiled-in X.509 certificates Jan 29 16:37:31.115472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:37:31.115485 kernel: Key type .fscrypt registered Jan 29 16:37:31.115498 kernel: Key type fscrypt-provisioning registered Jan 29 16:37:31.115518 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 16:37:31.115531 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:37:31.115550 kernel: ima: No architecture policies found Jan 29 16:37:31.115564 kernel: clk: Disabling unused clocks Jan 29 16:37:31.115577 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:37:31.115591 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:37:31.115620 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:37:31.115632 kernel: Run /init as init process Jan 29 16:37:31.115645 kernel: with arguments: Jan 29 16:37:31.115680 kernel: /init Jan 29 16:37:31.115693 kernel: with environment: Jan 29 16:37:31.115711 kernel: HOME=/ Jan 29 16:37:31.115724 kernel: TERM=linux Jan 29 16:37:31.115745 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:37:31.115769 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:37:31.115788 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:37:31.115803 systemd[1]: Detected virtualization kvm. Jan 29 16:37:31.115817 systemd[1]: Detected architecture x86-64. Jan 29 16:37:31.115838 systemd[1]: Running in initrd. Jan 29 16:37:31.115859 systemd[1]: No hostname configured, using default hostname. Jan 29 16:37:31.115884 systemd[1]: Hostname set to . Jan 29 16:37:31.115901 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:37:31.115915 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:37:31.115930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:37:31.115944 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:37:31.115959 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:37:31.115974 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:37:31.115995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:37:31.116011 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:37:31.116027 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:37:31.116042 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:37:31.116056 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:37:31.116071 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:37:31.116090 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:37:31.116105 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:37:31.116119 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:37:31.116134 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:37:31.116148 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:37:31.116163 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:37:31.116178 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:37:31.116198 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:37:31.116212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:37:31.116232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:37:31.116246 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 16:37:31.116269 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:37:31.116284 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:37:31.116298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:37:31.116313 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:37:31.116356 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:37:31.116375 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:37:31.116395 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:37:31.116416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:37:31.116488 systemd-journald[202]: Collecting audit messages is disabled. Jan 29 16:37:31.116534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:37:31.116549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:37:31.116571 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:37:31.116586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:37:31.116601 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:37:31.116615 kernel: Bridge firewalling registered Jan 29 16:37:31.116643 systemd-journald[202]: Journal started Jan 29 16:37:31.116676 systemd-journald[202]: Runtime Journal (/run/log/journal/8ce0092101a048aa81e51d5d513bf4b8) is 4.7M, max 37.9M, 33.2M free. Jan 29 16:37:31.049233 systemd-modules-load[203]: Inserted module 'overlay' Jan 29 16:37:31.156735 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:37:31.096904 systemd-modules-load[203]: Inserted module 'br_netfilter' Jan 29 16:37:31.165330 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 16:37:31.167331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:37:31.171006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:37:31.184567 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:37:31.187495 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:37:31.196530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:37:31.201161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:37:31.206739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:37:31.212945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:37:31.221551 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:37:31.222590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:37:31.231658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:37:31.239581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:37:31.248005 dracut-cmdline[233]: dracut-dracut-053 Jan 29 16:37:31.250261 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:37:31.288243 systemd-resolved[236]: Positive Trust Anchors: Jan 29 16:37:31.289362 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:37:31.289411 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:37:31.298758 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 29 16:37:31.302569 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:37:31.303418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:37:31.351450 kernel: SCSI subsystem initialized Jan 29 16:37:31.363370 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:37:31.377380 kernel: iscsi: registered transport (tcp) Jan 29 16:37:31.404860 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:37:31.404929 kernel: QLogic iSCSI HBA Driver Jan 29 16:37:31.461785 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:37:31.470592 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:37:31.512547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 16:37:31.512702 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:37:31.517381 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:37:31.570437 kernel: raid6: sse2x4 gen() 13154 MB/s Jan 29 16:37:31.587406 kernel: raid6: sse2x2 gen() 8860 MB/s Jan 29 16:37:31.606085 kernel: raid6: sse2x1 gen() 9070 MB/s Jan 29 16:37:31.606202 kernel: raid6: using algorithm sse2x4 gen() 13154 MB/s Jan 29 16:37:31.625046 kernel: raid6: .... xor() 7647 MB/s, rmw enabled Jan 29 16:37:31.625150 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 16:37:31.651391 kernel: xor: automatically using best checksumming function avx Jan 29 16:37:31.824409 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:37:31.838809 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:37:31.849600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:37:31.868768 systemd-udevd[420]: Using default interface naming scheme 'v255'. Jan 29 16:37:31.877895 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:37:31.886539 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:37:31.907383 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jan 29 16:37:31.948077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:37:31.955565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:37:32.076710 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:37:32.086552 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:37:32.115788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:37:32.118788 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 16:37:32.120705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:37:32.122748 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:37:32.130576 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:37:32.158612 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:37:32.213282 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 29 16:37:32.284777 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 16:37:32.285018 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:37:32.285041 kernel: GPT:17805311 != 125829119 Jan 29 16:37:32.285070 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:37:32.285088 kernel: GPT:17805311 != 125829119 Jan 29 16:37:32.285119 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:37:32.285136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:37:32.285152 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:37:32.285185 kernel: libata version 3.00 loaded. Jan 29 16:37:32.285201 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 16:37:32.342183 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 16:37:32.342221 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 16:37:32.342907 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 16:37:32.343120 kernel: ACPI: bus type USB registered Jan 29 16:37:32.343141 kernel: scsi host0: ahci Jan 29 16:37:32.343407 kernel: usbcore: registered new interface driver usbfs Jan 29 16:37:32.343428 kernel: scsi host1: ahci Jan 29 16:37:32.343640 kernel: scsi host2: ahci Jan 29 16:37:32.343899 kernel: AVX version of gcm_enc/dec engaged. 
Jan 29 16:37:32.343921 kernel: usbcore: registered new interface driver hub Jan 29 16:37:32.343939 kernel: AES CTR mode by8 optimization enabled Jan 29 16:37:32.343957 kernel: usbcore: registered new device driver usb Jan 29 16:37:32.343975 kernel: scsi host3: ahci Jan 29 16:37:32.344177 kernel: scsi host4: ahci Jan 29 16:37:32.349579 kernel: scsi host5: ahci Jan 29 16:37:32.349841 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 29 16:37:32.349874 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 29 16:37:32.349894 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 29 16:37:32.349913 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 29 16:37:32.349931 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 29 16:37:32.349950 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 29 16:37:32.310408 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:37:32.310587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:37:32.314568 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:37:32.315325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:37:32.315544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:37:32.469875 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467) Jan 29 16:37:32.469920 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472) Jan 29 16:37:32.317449 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:37:32.323645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 16:37:32.334925 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:37:32.413934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:37:32.483197 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 16:37:32.484593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:37:32.516411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 16:37:32.527354 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 16:37:32.528271 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 16:37:32.537639 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:37:32.542661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:37:32.544911 disk-uuid[559]: Primary Header is updated. Jan 29 16:37:32.544911 disk-uuid[559]: Secondary Entries is updated. Jan 29 16:37:32.544911 disk-uuid[559]: Secondary Header is updated. Jan 29 16:37:32.551413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:37:32.561386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:37:32.585651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 16:37:32.647365 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 16:37:32.647453 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 16:37:32.648733 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 16:37:32.655368 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 16:37:32.655441 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 16:37:32.655461 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 16:37:32.686406 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 16:37:32.722239 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 29 16:37:32.722640 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 16:37:32.722894 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 16:37:32.723111 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 29 16:37:32.723330 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 29 16:37:32.723982 kernel: hub 1-0:1.0: USB hub found Jan 29 16:37:32.725153 kernel: hub 1-0:1.0: 4 ports detected Jan 29 16:37:32.725427 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 29 16:37:32.725677 kernel: hub 2-0:1.0: USB hub found Jan 29 16:37:32.725940 kernel: hub 2-0:1.0: 4 ports detected Jan 29 16:37:32.954379 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 16:37:33.097371 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 16:37:33.104561 kernel: usbcore: registered new interface driver usbhid Jan 29 16:37:33.104631 kernel: usbhid: USB HID core driver Jan 29 16:37:33.113018 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 29 16:37:33.113060 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 29 16:37:33.559394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:37:33.562359 disk-uuid[560]: The operation has completed successfully. Jan 29 16:37:33.635579 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:37:33.635771 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:37:33.685566 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:37:33.691359 sh[586]: Success Jan 29 16:37:33.709383 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 29 16:37:33.775654 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:37:33.787437 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:37:33.790613 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 16:37:33.818392 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:37:33.818495 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:37:33.818517 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:37:33.820111 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:37:33.822379 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:37:33.833650 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:37:33.835208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:37:33.844713 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:37:33.848870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:37:33.869367 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:37:33.873347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:37:33.873383 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:37:33.878354 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:37:33.891474 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:37:33.894578 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:37:33.900486 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:37:33.907799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 16:37:34.042877 ignition[679]: Ignition 2.20.0 Jan 29 16:37:34.042904 ignition[679]: Stage: fetch-offline Jan 29 16:37:34.042999 ignition[679]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:37:34.043020 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:37:34.043235 ignition[679]: parsed url from cmdline: "" Jan 29 16:37:34.043242 ignition[679]: no config URL provided Jan 29 16:37:34.043252 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:37:34.047988 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:37:34.043268 ignition[679]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:37:34.043285 ignition[679]: failed to fetch config: resource requires networking Jan 29 16:37:34.043645 ignition[679]: Ignition finished successfully Jan 29 16:37:34.057901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:37:34.064713 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:37:34.110527 systemd-networkd[776]: lo: Link UP Jan 29 16:37:34.110544 systemd-networkd[776]: lo: Gained carrier Jan 29 16:37:34.113108 systemd-networkd[776]: Enumeration completed Jan 29 16:37:34.113695 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:37:34.113733 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:37:34.113740 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:37:34.114995 systemd[1]: Reached target network.target - Network. 
Jan 29 16:37:34.115133 systemd-networkd[776]: eth0: Link UP
Jan 29 16:37:34.115139 systemd-networkd[776]: eth0: Gained carrier
Jan 29 16:37:34.115151 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:37:34.123507 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:37:34.140442 ignition[781]: Ignition 2.20.0
Jan 29 16:37:34.140467 ignition[781]: Stage: fetch
Jan 29 16:37:34.140724 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:37:34.142429 systemd-networkd[776]: eth0: DHCPv4 address 10.230.24.202/30, gateway 10.230.24.201 acquired from 10.230.24.201
Jan 29 16:37:34.140745 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:37:34.140899 ignition[781]: parsed url from cmdline: ""
Jan 29 16:37:34.140907 ignition[781]: no config URL provided
Jan 29 16:37:34.140917 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:37:34.140935 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:37:34.141110 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 16:37:34.141262 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 16:37:34.141302 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 16:37:34.141478 ignition[781]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 16:37:34.341606 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 29 16:37:34.363498 ignition[781]: GET result: OK
Jan 29 16:37:34.363653 ignition[781]: parsing config with SHA512: e875120d6f8c191482d0b79264921796c5f9861f8b8ad076ae3a3cd3ccd63b155835f09581d70ba11044539012c9a1590a0dd1ca7f90ffc142e024c0ec089036
Jan 29 16:37:34.368318 unknown[781]: fetched base config from "system"
Jan 29 16:37:34.368351 unknown[781]: fetched base config from "system"
Jan 29 16:37:34.368630 ignition[781]: fetch: fetch complete
Jan 29 16:37:34.368363 unknown[781]: fetched user config from "openstack"
Jan 29 16:37:34.368639 ignition[781]: fetch: fetch passed
Jan 29 16:37:34.371163 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:37:34.368718 ignition[781]: Ignition finished successfully
Jan 29 16:37:34.377559 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:37:34.399319 ignition[788]: Ignition 2.20.0
Jan 29 16:37:34.399362 ignition[788]: Stage: kargs
Jan 29 16:37:34.399599 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:37:34.402019 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:37:34.399620 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:37:34.400465 ignition[788]: kargs: kargs passed
Jan 29 16:37:34.400554 ignition[788]: Ignition finished successfully
Jan 29 16:37:34.411584 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:37:34.427696 ignition[795]: Ignition 2.20.0
Jan 29 16:37:34.427720 ignition[795]: Stage: disks
Jan 29 16:37:34.427974 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:37:34.430106 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:37:34.427995 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:37:34.428954 ignition[795]: disks: disks passed
Jan 29 16:37:34.429031 ignition[795]: Ignition finished successfully
Jan 29 16:37:34.433832 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:37:34.435133 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:37:34.436520 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:37:34.438102 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:37:34.439688 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:37:34.448612 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:37:34.467774 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:37:34.470740 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:37:34.824497 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:37:34.934383 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:37:34.935577 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:37:34.937008 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:37:34.946678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:37:34.950521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:37:34.952465 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:37:34.953601 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 16:37:34.957830 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:37:34.957957 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:37:34.966526 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812)
Jan 29 16:37:34.972936 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:37:34.976531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:37:34.977279 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:37:34.976491 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:37:34.988695 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:37:34.988008 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:37:34.996089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:37:35.060553 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:37:35.069888 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:37:35.079411 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:37:35.087437 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:37:35.195814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:37:35.206792 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:37:35.212574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:37:35.223384 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:37:35.251849 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:37:35.254335 ignition[929]: INFO : Ignition 2.20.0
Jan 29 16:37:35.255837 ignition[929]: INFO : Stage: mount
Jan 29 16:37:35.256531 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:37:35.256531 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:37:35.258318 ignition[929]: INFO : mount: mount passed
Jan 29 16:37:35.258318 ignition[929]: INFO : Ignition finished successfully
Jan 29 16:37:35.258857 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:37:35.814457 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:37:36.007252 systemd-networkd[776]: eth0: Gained IPv6LL
Jan 29 16:37:37.515194 systemd-networkd[776]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8632:24:19ff:fee6:18ca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8632:24:19ff:fee6:18ca/64 assigned by NDisc.
Jan 29 16:37:37.515222 systemd-networkd[776]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 16:37:42.113812 coreos-metadata[814]: Jan 29 16:37:42.113 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 16:37:42.140112 coreos-metadata[814]: Jan 29 16:37:42.140 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 16:37:42.159934 coreos-metadata[814]: Jan 29 16:37:42.159 INFO Fetch successful
Jan 29 16:37:42.160800 coreos-metadata[814]: Jan 29 16:37:42.160 INFO wrote hostname srv-xmqrr.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 16:37:42.163093 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 16:37:42.163283 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 16:37:42.175628 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:37:42.198644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:37:42.212393 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946)
Jan 29 16:37:42.216358 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:37:42.216393 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:37:42.217762 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:37:42.223362 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:37:42.226884 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:37:42.254397 ignition[964]: INFO : Ignition 2.20.0
Jan 29 16:37:42.254397 ignition[964]: INFO : Stage: files
Jan 29 16:37:42.254397 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:37:42.254397 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:37:42.258912 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:37:42.258912 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:37:42.258912 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:37:42.262689 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:37:42.264674 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:37:42.265705 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:37:42.264904 unknown[964]: wrote ssh authorized keys file for user: core
Jan 29 16:37:42.267678 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:37:42.267678 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:37:42.267678 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:37:42.271451 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:37:42.271451 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:37:42.271451 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:37:42.271451 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:37:42.271451 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 16:37:42.925736 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 16:37:44.928298 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:37:44.935857 ignition[964]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:37:44.935857 ignition[964]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:37:44.935857 ignition[964]: INFO : files: files passed
Jan 29 16:37:44.935857 ignition[964]: INFO : Ignition finished successfully
Jan 29 16:37:44.935674 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:37:44.946604 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:37:44.948579 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:37:44.959043 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:37:44.959266 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:37:44.972455 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:37:44.975058 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:37:44.976187 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:37:44.978399 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:37:44.980578 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:37:44.990619 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:37:45.029781 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:37:45.030031 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:37:45.032293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:37:45.033450 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:37:45.035131 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:37:45.041582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:37:45.062958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:37:45.067543 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:37:45.090268 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:37:45.091205 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:37:45.092936 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:37:45.094397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:37:45.094599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:37:45.096379 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:37:45.097315 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:37:45.098823 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:37:45.100176 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:37:45.101550 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:37:45.103088 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:37:45.104709 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:37:45.106363 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:37:45.107821 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:37:45.109482 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:37:45.110855 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:37:45.111048 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:37:45.112841 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:37:45.113767 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:37:45.115269 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:37:45.117552 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:37:45.118710 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:37:45.118891 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:37:45.120853 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:37:45.121096 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:37:45.122756 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:37:45.122986 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:37:45.131569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:37:45.134168 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:37:45.134376 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:37:45.138680 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:37:45.139403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:37:45.139675 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:37:45.141860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:37:45.142038 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:37:45.157494 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:37:45.158704 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:37:45.161467 ignition[1016]: INFO : Ignition 2.20.0
Jan 29 16:37:45.161467 ignition[1016]: INFO : Stage: umount
Jan 29 16:37:45.164830 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:37:45.164830 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:37:45.167912 ignition[1016]: INFO : umount: umount passed
Jan 29 16:37:45.167912 ignition[1016]: INFO : Ignition finished successfully
Jan 29 16:37:45.167758 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:37:45.167934 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:37:45.170308 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:37:45.170505 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:37:45.174519 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:37:45.174611 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:37:45.175383 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:37:45.175477 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:37:45.176184 systemd[1]: Stopped target network.target - Network.
Jan 29 16:37:45.176844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:37:45.176929 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:37:45.178321 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:37:45.178962 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:37:45.182443 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:37:45.189462 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:37:45.190984 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:37:45.192715 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:37:45.192803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:37:45.194067 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:37:45.194140 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:37:45.195654 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:37:45.195730 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:37:45.197323 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:37:45.197442 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:37:45.198919 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:37:45.200884 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:37:45.203732 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jan 29 16:37:45.206290 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:37:45.207115 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:37:45.207701 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:37:45.213567 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:37:45.213996 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:37:45.214175 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:37:45.216898 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:37:45.218144 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:37:45.218590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:37:45.224517 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:37:45.225745 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:37:45.225821 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:37:45.228921 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:37:45.228999 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:37:45.230560 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:37:45.230640 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:37:45.232087 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:37:45.232157 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:37:45.234160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:37:45.239486 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:37:45.239592 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:37:45.252047 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:37:45.252326 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:37:45.255321 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:37:45.255584 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:37:45.257178 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:37:45.257329 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:37:45.260415 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:37:45.260491 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:37:45.261995 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:37:45.262079 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:37:45.264172 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:37:45.264253 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:37:45.265666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:37:45.265759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:37:45.274555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:37:45.277758 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:37:45.277835 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:37:45.280497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:37:45.280570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:37:45.283568 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:37:45.283661 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:37:45.287666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:37:45.287847 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:37:45.301882 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:37:45.302077 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:37:45.304108 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:37:45.304911 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:37:45.305021 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:37:45.309538 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:37:45.330863 systemd[1]: Switching root.
Jan 29 16:37:45.366372 systemd-journald[202]: Journal stopped
Jan 29 16:37:46.914034 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:37:46.914137 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:37:46.914164 kernel: SELinux: policy capability open_perms=1
Jan 29 16:37:46.914183 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:37:46.914209 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:37:46.914262 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:37:46.914283 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:37:46.914309 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:37:46.914373 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:37:46.914399 kernel: audit: type=1403 audit(1738168665.606:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:37:46.914428 systemd[1]: Successfully loaded SELinux policy in 51.553ms.
Jan 29 16:37:46.914475 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.792ms.
Jan 29 16:37:46.914499 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:37:46.914526 systemd[1]: Detected virtualization kvm.
Jan 29 16:37:46.914560 systemd[1]: Detected architecture x86-64.
Jan 29 16:37:46.914583 systemd[1]: Detected first boot.
Jan 29 16:37:46.914611 systemd[1]: Hostname set to .
Jan 29 16:37:46.914633 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:37:46.914654 zram_generator::config[1061]: No configuration found.
Jan 29 16:37:46.914676 kernel: Guest personality initialized and is inactive
Jan 29 16:37:46.914695 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:37:46.914715 kernel: Initialized host personality
Jan 29 16:37:46.914746 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:37:46.914767 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:37:46.914797 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:37:46.914818 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:37:46.914853 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:37:46.914880 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:37:46.914901 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:37:46.914922 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:37:46.914960 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:37:46.914982 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:37:46.915013 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:37:46.915033 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:37:46.915071 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:37:46.915091 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:37:46.915112 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:37:46.915133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:37:46.915154 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:37:46.915206 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:37:46.915229 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:37:46.915267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:37:46.915287 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:37:46.915307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:37:46.915328 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:37:46.915411 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:37:46.915436 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:37:46.915458 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:37:46.915478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:37:46.915499 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:37:46.915521 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:37:46.915541 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:37:46.915563 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:37:46.915583 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:37:46.915617 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:37:46.915645 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:37:46.915667 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:37:46.915694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:37:46.915733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:37:46.915768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:37:46.915810 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:37:46.915834 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:37:46.915867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:46.915887 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:37:46.915907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:37:46.915937 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:37:46.915958 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:37:46.915986 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:37:46.916006 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:37:46.916037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:37:46.916057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:37:46.916089 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:37:46.916111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:37:46.916131 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:37:46.916151 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:37:46.916170 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:37:46.916200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:37:46.916233 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:37:46.916265 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:37:46.916286 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:37:46.916305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:37:46.916370 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:37:46.916405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:37:46.916427 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:37:46.916449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:37:46.916469 kernel: loop: module loaded
Jan 29 16:37:46.916505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:37:46.916528 kernel: fuse: init (API version 7.39)
Jan 29 16:37:46.916548 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:37:46.916569 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:37:46.916590 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:37:46.916612 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:37:46.916632 kernel: ACPI: bus type drm_connector registered
Jan 29 16:37:46.916675 systemd[1]: Stopped verity-setup.service.
Jan 29 16:37:46.916699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:46.916754 systemd-journald[1151]: Collecting audit messages is disabled.
Jan 29 16:37:46.916824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:37:46.916855 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:37:46.916888 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:37:46.916919 systemd-journald[1151]: Journal started
Jan 29 16:37:46.916957 systemd-journald[1151]: Runtime Journal (/run/log/journal/8ce0092101a048aa81e51d5d513bf4b8) is 4.7M, max 37.9M, 33.2M free.
Jan 29 16:37:46.524483 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:37:46.537930 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 16:37:46.538748 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:37:46.925461 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:37:46.928016 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:37:46.929596 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:37:46.930553 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:37:46.932424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:37:46.933737 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:37:46.934441 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:37:46.935905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:37:46.936446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:37:46.937911 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:37:46.938614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:37:46.942501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:37:46.942836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:37:46.945095 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:37:46.945500 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:37:46.946733 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:37:46.947432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:37:46.949679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:37:46.950993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:37:46.952430 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:37:46.966446 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:37:46.981675 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:37:46.991482 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:37:46.998452 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:37:47.001119 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:37:47.001172 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:37:47.006212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:37:47.014562 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:37:47.023224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:37:47.024201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:37:47.028569 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:37:47.037718 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:37:47.039534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:37:47.046489 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:37:47.047301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:37:47.059491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:37:47.062286 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:37:47.068194 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:37:47.069902 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:37:47.071785 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:37:47.074164 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:37:47.099391 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:37:47.100761 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:37:47.102522 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:37:47.104754 systemd-journald[1151]: Time spent on flushing to /var/log/journal/8ce0092101a048aa81e51d5d513bf4b8 is 113.128ms for 1143 entries.
Jan 29 16:37:47.104754 systemd-journald[1151]: System Journal (/var/log/journal/8ce0092101a048aa81e51d5d513bf4b8) is 8M, max 584.8M, 576.8M free.
Jan 29 16:37:47.248661 systemd-journald[1151]: Received client request to flush runtime journal.
Jan 29 16:37:47.248727 kernel: loop0: detected capacity change from 0 to 8
Jan 29 16:37:47.248768 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:37:47.248791 kernel: loop1: detected capacity change from 0 to 147912
Jan 29 16:37:47.109839 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:37:47.205611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:37:47.211997 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:37:47.254183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:37:47.278670 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:37:47.292500 kernel: loop2: detected capacity change from 0 to 138176
Jan 29 16:37:47.291311 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:37:47.343925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:37:47.370041 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jan 29 16:37:47.370069 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jan 29 16:37:47.370460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:37:47.380002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:37:47.388364 kernel: loop3: detected capacity change from 0 to 210664
Jan 29 16:37:47.428638 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:37:47.461384 kernel: loop4: detected capacity change from 0 to 8
Jan 29 16:37:47.479151 kernel: loop5: detected capacity change from 0 to 147912
Jan 29 16:37:47.506413 kernel: loop6: detected capacity change from 0 to 138176
Jan 29 16:37:47.527391 kernel: loop7: detected capacity change from 0 to 210664
Jan 29 16:37:47.542099 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:37:47.545579 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 16:37:47.546520 (sd-merge)[1226]: Merged extensions into '/usr'.
Jan 29 16:37:47.553224 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:37:47.553411 systemd[1]: Reloading...
Jan 29 16:37:47.665411 zram_generator::config[1251]: No configuration found.
Jan 29 16:37:47.916094 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:37:47.985557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:37:48.085584 systemd[1]: Reloading finished in 531 ms.
Jan 29 16:37:48.118118 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:37:48.119820 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:37:48.132715 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:37:48.137197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:37:48.165487 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:37:48.167808 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:37:48.170597 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:37:48.170999 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Jan 29 16:37:48.171133 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Jan 29 16:37:48.172722 systemd[1]: Reload requested from client PID 1311 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:37:48.172747 systemd[1]: Reloading...
Jan 29 16:37:48.181934 systemd-tmpfiles[1312]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:37:48.183400 systemd-tmpfiles[1312]: Skipping /boot
Jan 29 16:37:48.228040 systemd-tmpfiles[1312]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:37:48.229625 systemd-tmpfiles[1312]: Skipping /boot
Jan 29 16:37:48.263377 zram_generator::config[1340]: No configuration found.
Jan 29 16:37:48.472016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:37:48.574139 systemd[1]: Reloading finished in 400 ms.
Jan 29 16:37:48.591460 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:37:48.606752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:37:48.620737 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:37:48.624705 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:37:48.630671 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:37:48.635668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:37:48.646592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:37:48.663869 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:37:48.673862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:48.674162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:37:48.676815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:37:48.681160 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:37:48.688695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:37:48.689639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:37:48.689817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:37:48.699680 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:37:48.701023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:48.711057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:48.711394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:37:48.711650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:37:48.711785 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:37:48.711918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:48.713701 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:37:48.722960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:48.723321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:37:48.725650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:37:48.726554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:37:48.726731 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:37:48.726923 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:37:48.735444 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:37:48.751559 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:37:48.754416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:37:48.768173 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:37:48.776470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:37:48.778451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:37:48.781329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:37:48.794415 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:37:48.795519 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:37:48.796481 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Jan 29 16:37:48.799833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:37:48.801425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:37:48.802689 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:37:48.802983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:37:48.804819 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:37:48.805143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:37:48.809898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:37:48.826200 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:37:48.849242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:37:48.859429 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:37:48.866353 augenrules[1445]: No rules
Jan 29 16:37:48.868938 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:37:48.869326 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:37:48.878082 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:37:49.078904 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:37:49.080095 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:37:49.083887 systemd-networkd[1440]: lo: Link UP
Jan 29 16:37:49.086383 systemd-networkd[1440]: lo: Gained carrier
Jan 29 16:37:49.093156 systemd-networkd[1440]: Enumeration completed
Jan 29 16:37:49.093298 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:37:49.103578 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:37:49.107890 systemd-resolved[1403]: Positive Trust Anchors:
Jan 29 16:37:49.112565 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:37:49.113598 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:37:49.113741 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:37:49.113785 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:37:49.132979 systemd-resolved[1403]: Using system hostname 'srv-xmqrr.gb1.brightbox.com'.
Jan 29 16:37:49.145346 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:37:49.148143 systemd[1]: Reached target network.target - Network.
Jan 29 16:37:49.148820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:37:49.156365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1442)
Jan 29 16:37:49.167760 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:37:49.249214 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:37:49.251198 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:37:49.253229 systemd-networkd[1440]: eth0: Link UP
Jan 29 16:37:49.253241 systemd-networkd[1440]: eth0: Gained carrier
Jan 29 16:37:49.253261 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:37:49.275433 systemd-networkd[1440]: eth0: DHCPv4 address 10.230.24.202/30, gateway 10.230.24.201 acquired from 10.230.24.201
Jan 29 16:37:49.277265 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection.
Jan 29 16:37:49.317094 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 16:37:49.321362 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:37:49.328372 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:37:49.361382 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 16:37:49.368397 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 16:37:49.380687 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 16:37:49.380991 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 16:37:49.377446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:37:49.385575 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:37:49.422783 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:37:49.464673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:37:49.610515 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:37:49.626647 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:37:49.688564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:37:49.705402 lvm[1492]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:37:49.743889 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:37:49.746058 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:37:49.746996 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:37:49.748134 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:37:49.749103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:37:49.750362 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:37:49.751384 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:37:49.752314 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:37:49.753174 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:37:49.753233 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:37:49.753922 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:37:49.756366 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:37:49.759420 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:37:49.764361 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:37:49.765424 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:37:49.766239 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:37:49.784048 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:37:49.785370 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:37:49.798568 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:37:49.800177 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:37:49.801164 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:37:49.801861 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:37:49.802596 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:37:49.802661 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:37:49.805222 lvm[1497]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:37:49.809448 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:37:49.815567 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 16:37:49.819542 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:37:49.825483 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:37:49.829909 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:37:49.830641 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:37:49.834556 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:37:49.842538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:37:49.851902 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:37:49.866569 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:37:49.868632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:37:49.871536 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:37:49.875447 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:37:49.881544 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:37:49.884902 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:37:49.905013 dbus-daemon[1500]: [system] SELinux support is enabled
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found loop4
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found loop5
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found loop6
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found loop7
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda1
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda2
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda3
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found usr
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda4
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda6
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda7
Jan 29 16:37:49.912782 extend-filesystems[1502]: Found vda9
Jan 29 16:37:49.912782 extend-filesystems[1502]: Checking size of /dev/vda9
Jan 29 16:37:49.947536 jq[1501]: false
Jan 29 16:37:49.914988 dbus-daemon[1500]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1440 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 16:37:49.913426 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:37:49.941887 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 16:37:49.932901 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:37:49.933243 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:37:49.933754 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:37:49.934127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:37:49.941113 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:37:49.941200 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:37:49.943730 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:37:49.943759 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:37:49.965076 jq[1511]: true
Jan 29 16:37:49.964812 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:37:49.980537 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 16:37:50.000669 extend-filesystems[1502]: Resized partition /dev/vda9
Jan 29 16:37:50.008215 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:37:50.031360 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 29 16:37:50.052485 update_engine[1510]: I20250129 16:37:50.052328 1510 main.cc:92] Flatcar Update Engine starting
Jan 29 16:37:50.063278 jq[1532]: true
Jan 29 16:37:50.080050 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:37:50.080444 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:37:50.085011 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:37:50.086224 update_engine[1510]: I20250129 16:37:50.085556 1510 update_check_scheduler.cc:74] Next update check in 7m6s
Jan 29 16:37:50.097521 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:37:50.149607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1456)
Jan 29 16:37:50.283287 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 29 16:37:50.284394 dbus-daemon[1500]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1527 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 29 16:37:50.284675 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 29 16:37:50.307092 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 29 16:37:50.327472 bash[1554]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:37:50.332575 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:37:50.335464 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 29 16:37:50.335514 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:37:50.343667 systemd-logind[1509]: New seat seat0.
Jan 29 16:37:50.358658 systemd[1]: Starting sshkeys.service...
Jan 29 16:37:50.359543 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:37:50.394196 polkitd[1556]: Started polkitd version 121
Jan 29 16:37:50.419958 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:37:50.425354 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 29 16:37:50.423540 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 16:37:50.440177 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 16:37:50.434164 polkitd[1556]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 16:37:50.434284 polkitd[1556]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 16:37:50.440922 polkitd[1556]: Finished loading, compiling and executing 2 rules
Jan 29 16:37:50.448359 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 16:37:50.448359 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 29 16:37:50.448359 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 29 16:37:50.454209 extend-filesystems[1502]: Resized filesystem in /dev/vda9
Jan 29 16:37:50.449791 systemd[1]: Started polkit.service - Authorization Manager.
Jan 29 16:37:50.448667 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 29 16:37:50.453975 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:37:50.450035 polkitd[1556]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 29 16:37:50.456991 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:37:50.470798 systemd-networkd[1440]: eth0: Gained IPv6LL
Jan 29 16:37:50.475490 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection.
Jan 29 16:37:50.478018 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 16:37:50.482254 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 16:37:50.492734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:37:50.498771 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 16:37:50.511763 systemd-hostnamed[1527]: Hostname set to (static)
Jan 29 16:37:50.597655 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 16:37:50.634005 containerd[1523]: time="2025-01-29T16:37:50.633850687Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:37:50.699694 containerd[1523]: time="2025-01-29T16:37:50.699449968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.705169 containerd[1523]: time="2025-01-29T16:37:50.705094377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:37:50.705241 containerd[1523]: time="2025-01-29T16:37:50.705168858Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:37:50.705241 containerd[1523]: time="2025-01-29T16:37:50.705196684Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:37:50.705507 containerd[1523]: time="2025-01-29T16:37:50.705470174Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:37:50.705559 containerd[1523]: time="2025-01-29T16:37:50.705525802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.705659 containerd[1523]: time="2025-01-29T16:37:50.705629845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:37:50.705707 containerd[1523]: time="2025-01-29T16:37:50.705660410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.706351 containerd[1523]: time="2025-01-29T16:37:50.705958203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:37:50.706351 containerd[1523]: time="2025-01-29T16:37:50.705998990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.706351 containerd[1523]: time="2025-01-29T16:37:50.706020212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:37:50.706351 containerd[1523]: time="2025-01-29T16:37:50.706035755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.706351 containerd[1523]: time="2025-01-29T16:37:50.706187132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.709060 containerd[1523]: time="2025-01-29T16:37:50.707620020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:37:50.710359 containerd[1523]: time="2025-01-29T16:37:50.709235367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:37:50.710359 containerd[1523]: time="2025-01-29T16:37:50.709274681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:37:50.710359 containerd[1523]: time="2025-01-29T16:37:50.709911928Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:37:50.710359 containerd[1523]: time="2025-01-29T16:37:50.709989334Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:37:50.722276 containerd[1523]: time="2025-01-29T16:37:50.722236782Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:37:50.722406 containerd[1523]: time="2025-01-29T16:37:50.722328378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:37:50.722454 containerd[1523]: time="2025-01-29T16:37:50.722423852Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:37:50.722526 containerd[1523]: time="2025-01-29T16:37:50.722454067Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:37:50.722526 containerd[1523]: time="2025-01-29T16:37:50.722476825Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:37:50.722729 containerd[1523]: time="2025-01-29T16:37:50.722699382Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:37:50.723216 containerd[1523]: time="2025-01-29T16:37:50.723174230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:37:50.723432 containerd[1523]: time="2025-01-29T16:37:50.723406431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:37:50.723483 containerd[1523]: time="2025-01-29T16:37:50.723438565Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:37:50.723483 containerd[1523]: time="2025-01-29T16:37:50.723463954Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:37:50.723604 containerd[1523]: time="2025-01-29T16:37:50.723489966Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723604 containerd[1523]: time="2025-01-29T16:37:50.723520700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723604 containerd[1523]: time="2025-01-29T16:37:50.723539630Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723604 containerd[1523]: time="2025-01-29T16:37:50.723559425Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723604 containerd[1523]: time="2025-01-29T16:37:50.723589712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723618071Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723655361Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723674720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723725428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723756111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723786420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.723826 containerd[1523]: time="2025-01-29T16:37:50.723816909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723838822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723865608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723906109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723923455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723942725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723964801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723983077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.723999268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.724015759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.724040349Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.724076147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.724099279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.724137 containerd[1523]: time="2025-01-29T16:37:50.724116349Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725278177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725445517Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725470738Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725493074Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725518146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725563219Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725604147Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:37:50.726418 containerd[1523]: time="2025-01-29T16:37:50.725626971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:37:50.726801 containerd[1523]: time="2025-01-29T16:37:50.726099731Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:37:50.726801 containerd[1523]: time="2025-01-29T16:37:50.726172449Z" level=info msg="Connect containerd service"
Jan 29 16:37:50.726801 containerd[1523]: time="2025-01-29T16:37:50.726247620Z" level=info msg="using legacy CRI server"
Jan 29 16:37:50.726801 containerd[1523]: time="2025-01-29T16:37:50.726271982Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:37:50.729555 containerd[1523]: time="2025-01-29T16:37:50.727491798Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:37:50.729656 containerd[1523]: time="2025-01-29T16:37:50.729621076Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:37:50.729851 containerd[1523]: time="2025-01-29T16:37:50.729782850Z" level=info msg="Start subscribing containerd event"
Jan 29 16:37:50.729916 containerd[1523]: time="2025-01-29T16:37:50.729881373Z" level=info msg="Start recovering state"
Jan 29 16:37:50.730077 containerd[1523]: time="2025-01-29T16:37:50.730054024Z" level=info msg="Start event monitor"
Jan 29 16:37:50.730163 containerd[1523]: time="2025-01-29T16:37:50.730098900Z" level=info msg="Start snapshots syncer"
Jan 29 16:37:50.730163 containerd[1523]: time="2025-01-29T16:37:50.730122677Z" level=info msg="Start cni network conf syncer for default"
Jan 29 16:37:50.730163 containerd[1523]: time="2025-01-29T16:37:50.730135504Z" level=info msg="Start streaming server"
Jan 29 16:37:50.732614 containerd[1523]: time="2025-01-29T16:37:50.732585318Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 16:37:50.732710 containerd[1523]: time="2025-01-29T16:37:50.732686368Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 16:37:50.732954 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 16:37:50.733889 containerd[1523]: time="2025-01-29T16:37:50.733859735Z" level=info msg="containerd successfully booted in 0.104821s"
Jan 29 16:37:50.780171 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:37:50.825193 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:37:50.838858 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:37:50.847119 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:37:50.847512 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:37:50.856937 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:37:50.869658 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:37:50.882016 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:37:50.891051 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 16:37:50.892604 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:37:51.496682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:37:51.511848 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:37:51.976966 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection.
Jan 29 16:37:51.978553 systemd-networkd[1440]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8632:24:19ff:fee6:18ca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8632:24:19ff:fee6:18ca/64 assigned by NDisc.
Jan 29 16:37:51.978564 systemd-networkd[1440]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 16:37:52.159634 kubelet[1615]: E0129 16:37:52.159474 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:37:52.162662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:37:52.162959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:37:52.163937 systemd[1]: kubelet.service: Consumed 1.055s CPU time, 244.2M memory peak.
Jan 29 16:37:53.927828 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection.
Jan 29 16:37:55.985229 login[1607]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
Jan 29 16:37:55.987491 login[1608]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 16:37:56.008425 systemd-logind[1509]: New session 1 of user core.
Jan 29 16:37:56.011121 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:37:56.020022 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:37:56.038436 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:37:56.050895 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:37:56.055962 (systemd)[1633]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:37:56.059702 systemd-logind[1509]: New session c1 of user core.
Jan 29 16:37:56.249762 systemd[1633]: Queued start job for default target default.target.
Jan 29 16:37:56.262241 systemd[1633]: Created slice app.slice - User Application Slice.
Jan 29 16:37:56.262288 systemd[1633]: Reached target paths.target - Paths.
Jan 29 16:37:56.262392 systemd[1633]: Reached target timers.target - Timers.
Jan 29 16:37:56.264532 systemd[1633]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:37:56.279233 systemd[1633]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:37:56.279443 systemd[1633]: Reached target sockets.target - Sockets.
Jan 29 16:37:56.279518 systemd[1633]: Reached target basic.target - Basic System.
Jan 29 16:37:56.279597 systemd[1633]: Reached target default.target - Main User Target.
Jan 29 16:37:56.279686 systemd[1633]: Startup finished in 210ms.
Jan 29 16:37:56.279697 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:37:56.287864 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:37:56.943362 coreos-metadata[1499]: Jan 29 16:37:56.943 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 16:37:56.970977 coreos-metadata[1499]: Jan 29 16:37:56.970 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 29 16:37:56.977474 coreos-metadata[1499]: Jan 29 16:37:56.977 INFO Fetch failed with 404: resource not found
Jan 29 16:37:56.977474 coreos-metadata[1499]: Jan 29 16:37:56.977 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 16:37:56.978152 coreos-metadata[1499]: Jan 29 16:37:56.978 INFO Fetch successful
Jan 29 16:37:56.978279 coreos-metadata[1499]: Jan 29 16:37:56.978 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 29 16:37:56.986993 login[1607]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 16:37:56.994805 systemd-logind[1509]: New session 2 of user core.
Jan 29 16:37:56.997068 coreos-metadata[1499]: Jan 29 16:37:56.996 INFO Fetch successful
Jan 29 16:37:56.997068 coreos-metadata[1499]: Jan 29 16:37:56.997 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 29 16:37:57.004756 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:37:57.009782 coreos-metadata[1499]: Jan 29 16:37:57.009 INFO Fetch successful
Jan 29 16:37:57.009916 coreos-metadata[1499]: Jan 29 16:37:57.009 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 29 16:37:57.022320 coreos-metadata[1499]: Jan 29 16:37:57.022 INFO Fetch successful
Jan 29 16:37:57.022320 coreos-metadata[1499]: Jan 29 16:37:57.022 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 29 16:37:57.037215 coreos-metadata[1499]: Jan 29 16:37:57.037 INFO Fetch successful
Jan 29 16:37:57.079626 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 16:37:57.080624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:37:57.567774 coreos-metadata[1570]: Jan 29 16:37:57.567 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 16:37:57.590666 coreos-metadata[1570]: Jan 29 16:37:57.590 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 29 16:37:57.616150 coreos-metadata[1570]: Jan 29 16:37:57.616 INFO Fetch successful
Jan 29 16:37:57.616329 coreos-metadata[1570]: Jan 29 16:37:57.616 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 29 16:37:57.643362 coreos-metadata[1570]: Jan 29 16:37:57.643 INFO Fetch successful
Jan 29 16:37:57.645349 unknown[1570]: wrote ssh authorized keys file for user: core
Jan 29 16:37:57.670168 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:37:57.671185 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 16:37:57.674191 systemd[1]: Finished sshkeys.service.
Jan 29 16:37:57.676053 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:37:57.676451 systemd[1]: Startup finished in 1.325s (kernel) + 14.844s (initrd) + 12.119s (userspace) = 28.290s.
Jan 29 16:37:59.842268 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:37:59.854230 systemd[1]: Started sshd@0-10.230.24.202:22-139.178.68.195:58750.service - OpenSSH per-connection server daemon (139.178.68.195:58750).
Jan 29 16:38:00.774025 sshd[1675]: Accepted publickey for core from 139.178.68.195 port 58750 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:00.776071 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:00.783962 systemd-logind[1509]: New session 3 of user core.
Jan 29 16:38:00.792564 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:38:01.554843 systemd[1]: Started sshd@1-10.230.24.202:22-139.178.68.195:58752.service - OpenSSH per-connection server daemon (139.178.68.195:58752).
Jan 29 16:38:02.296195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:38:02.302602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:38:02.448487 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 58752 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:02.450597 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:02.451400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:38:02.459418 systemd-logind[1509]: New session 4 of user core.
Jan 29 16:38:02.460881 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:38:02.461803 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:38:02.527896 kubelet[1689]: E0129 16:38:02.527820 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:38:02.533022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:38:02.533284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:38:02.533920 systemd[1]: kubelet.service: Consumed 205ms CPU time, 96.4M memory peak.
Jan 29 16:38:03.066280 sshd[1695]: Connection closed by 139.178.68.195 port 58752
Jan 29 16:38:03.067478 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Jan 29 16:38:03.073198 systemd[1]: sshd@1-10.230.24.202:22-139.178.68.195:58752.service: Deactivated successfully.
Jan 29 16:38:03.075642 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:38:03.077781 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:38:03.079205 systemd-logind[1509]: Removed session 4.
Jan 29 16:38:03.234835 systemd[1]: Started sshd@2-10.230.24.202:22-139.178.68.195:58768.service - OpenSSH per-connection server daemon (139.178.68.195:58768).
Jan 29 16:38:04.133511 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 58768 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:04.135444 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:04.143089 systemd-logind[1509]: New session 5 of user core.
Jan 29 16:38:04.148558 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:38:04.749967 sshd[1706]: Connection closed by 139.178.68.195 port 58768
Jan 29 16:38:04.749176 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Jan 29 16:38:04.753974 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:38:04.755967 systemd[1]: sshd@2-10.230.24.202:22-139.178.68.195:58768.service: Deactivated successfully.
Jan 29 16:38:04.758229 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:38:04.759716 systemd-logind[1509]: Removed session 5.
Jan 29 16:38:04.908692 systemd[1]: Started sshd@3-10.230.24.202:22-139.178.68.195:37554.service - OpenSSH per-connection server daemon (139.178.68.195:37554).
Jan 29 16:38:05.801018 sshd[1712]: Accepted publickey for core from 139.178.68.195 port 37554 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:05.803052 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:05.811060 systemd-logind[1509]: New session 6 of user core.
Jan 29 16:38:05.826579 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:38:06.420475 sshd[1714]: Connection closed by 139.178.68.195 port 37554
Jan 29 16:38:06.421427 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Jan 29 16:38:06.425546 systemd[1]: sshd@3-10.230.24.202:22-139.178.68.195:37554.service: Deactivated successfully.
Jan 29 16:38:06.427978 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:38:06.430462 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:38:06.431895 systemd-logind[1509]: Removed session 6.
Jan 29 16:38:06.581765 systemd[1]: Started sshd@4-10.230.24.202:22-139.178.68.195:37558.service - OpenSSH per-connection server daemon (139.178.68.195:37558).
Jan 29 16:38:07.468534 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 37558 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:07.470609 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:07.480390 systemd-logind[1509]: New session 7 of user core.
Jan 29 16:38:07.488577 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:38:07.955235 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:38:07.955805 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:38:07.973622 sudo[1723]: pam_unix(sudo:session): session closed for user root
Jan 29 16:38:08.118399 sshd[1722]: Connection closed by 139.178.68.195 port 37558
Jan 29 16:38:08.117398 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Jan 29 16:38:08.122747 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:38:08.123332 systemd[1]: sshd@4-10.230.24.202:22-139.178.68.195:37558.service: Deactivated successfully.
Jan 29 16:38:08.125554 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:38:08.126871 systemd-logind[1509]: Removed session 7.
Jan 29 16:38:08.280815 systemd[1]: Started sshd@5-10.230.24.202:22-139.178.68.195:37570.service - OpenSSH per-connection server daemon (139.178.68.195:37570).
Jan 29 16:38:09.170377 sshd[1729]: Accepted publickey for core from 139.178.68.195 port 37570 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:09.172472 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:09.180700 systemd-logind[1509]: New session 8 of user core.
Jan 29 16:38:09.187551 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:38:09.647435 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:38:09.648770 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:38:09.654170 sudo[1733]: pam_unix(sudo:session): session closed for user root
Jan 29 16:38:09.662007 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:38:09.662497 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:38:09.691173 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:38:09.728866 augenrules[1755]: No rules
Jan 29 16:38:09.729820 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:38:09.730175 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:38:09.732210 sudo[1732]: pam_unix(sudo:session): session closed for user root
Jan 29 16:38:09.875413 sshd[1731]: Connection closed by 139.178.68.195 port 37570
Jan 29 16:38:09.876390 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Jan 29 16:38:09.881373 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:38:09.881938 systemd[1]: sshd@5-10.230.24.202:22-139.178.68.195:37570.service: Deactivated successfully.
Jan 29 16:38:09.884250 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:38:09.885643 systemd-logind[1509]: Removed session 8.
Jan 29 16:38:10.037774 systemd[1]: Started sshd@6-10.230.24.202:22-139.178.68.195:37574.service - OpenSSH per-connection server daemon (139.178.68.195:37574).
Jan 29 16:38:10.925578 sshd[1764]: Accepted publickey for core from 139.178.68.195 port 37574 ssh2: RSA SHA256:7dTxPmip35TWTWu6mEyj4H7R5+NoH6jBkLY6mIqymkE
Jan 29 16:38:10.927450 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:38:10.935629 systemd-logind[1509]: New session 9 of user core.
Jan 29 16:38:10.942608 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:38:11.402281 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:38:11.402814 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:38:12.251271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:38:12.251590 systemd[1]: kubelet.service: Consumed 205ms CPU time, 96.4M memory peak.
Jan 29 16:38:12.266764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:38:12.295394 systemd[1]: Reload requested from client PID 1805 ('systemctl') (unit session-9.scope)...
Jan 29 16:38:12.295436 systemd[1]: Reloading...
Jan 29 16:38:12.456390 zram_generator::config[1851]: No configuration found.
Jan 29 16:38:12.640239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:38:12.794903 systemd[1]: Reloading finished in 498 ms.
Jan 29 16:38:12.859660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:38:12.866049 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:38:12.876027 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:38:12.877427 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:38:12.877826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:38:12.877910 systemd[1]: kubelet.service: Consumed 134ms CPU time, 85M memory peak.
Jan 29 16:38:12.885719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:38:13.105927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:38:13.117055 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:38:13.195668 kubelet[1925]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:38:13.196389 kubelet[1925]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:38:13.196389 kubelet[1925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:38:13.198427 kubelet[1925]: I0129 16:38:13.197620 1925 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:38:13.824993 kubelet[1925]: I0129 16:38:13.824882 1925 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:38:13.824993 kubelet[1925]: I0129 16:38:13.824929 1925 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:38:13.825309 kubelet[1925]: I0129 16:38:13.825267 1925 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:38:13.840496 kubelet[1925]: I0129 16:38:13.840433 1925 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:38:13.854985 kubelet[1925]: I0129 16:38:13.854914 1925 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:38:13.857524 kubelet[1925]: I0129 16:38:13.857451 1925 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:38:13.857768 kubelet[1925]: I0129 16:38:13.857516 1925 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.24.202","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:38:13.858007 kubelet[1925]: I0129 16:38:13.857798 1925 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:38:13.858007 kubelet[1925]: I0129 16:38:13.857814 1925 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:38:13.858111 kubelet[1925]: I0129 16:38:13.858012 1925 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:38:13.858964 kubelet[1925]: I0129 16:38:13.858935 1925 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:38:13.858964 kubelet[1925]: I0129 16:38:13.858962 1925 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:38:13.859090 kubelet[1925]: I0129 16:38:13.859012 1925 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:38:13.859090 kubelet[1925]: I0129 16:38:13.859047 1925 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:38:13.863369 kubelet[1925]: E0129 16:38:13.862785 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:13.863369 kubelet[1925]: E0129 16:38:13.862981 1925 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:13.864294 kubelet[1925]: I0129 16:38:13.864264 1925 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:38:13.866263 kubelet[1925]: I0129 16:38:13.866236 1925 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:38:13.866543 kubelet[1925]: W0129 16:38:13.866521 1925 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:38:13.867618 kubelet[1925]: I0129 16:38:13.867595 1925 server.go:1264] "Started kubelet"
Jan 29 16:38:13.868656 kubelet[1925]: I0129 16:38:13.868607 1925 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:38:13.875428 kubelet[1925]: I0129 16:38:13.875401 1925 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 16:38:13.879382 kubelet[1925]: I0129 16:38:13.878037 1925 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:38:13.879382 kubelet[1925]: I0129 16:38:13.878531 1925 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:38:13.880126 kubelet[1925]: I0129 16:38:13.880102 1925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:38:13.887498 kubelet[1925]: I0129 16:38:13.887459 1925 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 16:38:13.887922 kubelet[1925]: I0129 16:38:13.887894 1925 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:38:13.888073 kubelet[1925]: I0129 16:38:13.888047 1925 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:38:13.890419 kubelet[1925]: E0129 16:38:13.890218 1925 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.24.202.181f3737c5ae3f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.24.202,UID:10.230.24.202,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.24.202,},FirstTimestamp:2025-01-29 16:38:13.867560849 +0000 UTC m=+0.746005691,LastTimestamp:2025-01-29 16:38:13.867560849 +0000 UTC m=+0.746005691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.24.202,}"
Jan 29 16:38:13.892889 kubelet[1925]: I0129 16:38:13.892772 1925 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:38:13.896009 kubelet[1925]: I0129 16:38:13.895821 1925 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:38:13.896009 kubelet[1925]: I0129 16:38:13.895846 1925 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:38:13.920245 kubelet[1925]: E0129 16:38:13.915945 1925 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:38:13.920245 kubelet[1925]: E0129 16:38:13.917018 1925 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.24.202\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 29 16:38:13.920245 kubelet[1925]: W0129 16:38:13.917087 1925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.24.202" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 16:38:13.920245 kubelet[1925]: E0129 16:38:13.917137 1925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.24.202" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 16:38:13.920245 kubelet[1925]: W0129 16:38:13.917180 1925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 16:38:13.920245 kubelet[1925]: E0129 16:38:13.917199 1925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 16:38:13.920245 kubelet[1925]: W0129 16:38:13.917392 1925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 16:38:13.920834 kubelet[1925]: E0129 16:38:13.917415 1925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 16:38:13.920834 kubelet[1925]: E0129 16:38:13.918222 1925 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.24.202.181f3737c83adce8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.24.202,UID:10.230.24.202,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.230.24.202,},FirstTimestamp:2025-01-29 16:38:13.9103306 +0000 UTC m=+0.788775445,LastTimestamp:2025-01-29 16:38:13.9103306 +0000 UTC m=+0.788775445,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.24.202,}"
Jan 29 16:38:13.920834 kubelet[1925]: I0129 16:38:13.919969 1925 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:38:13.920834 kubelet[1925]: I0129 16:38:13.919986 1925 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:38:13.920834 kubelet[1925]: I0129 16:38:13.920015 1925 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:38:13.924199 kubelet[1925]: I0129 16:38:13.923873 1925 policy_none.go:49] "None policy: Start"
Jan 29 16:38:13.926014 kubelet[1925]: I0129 16:38:13.924965 1925 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:38:13.926014 kubelet[1925]: I0129 16:38:13.925012 1925 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:38:13.943667 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:38:13.957234 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:38:13.962868 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:38:13.971847 kubelet[1925]: I0129 16:38:13.971816 1925 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:38:13.972280 kubelet[1925]: I0129 16:38:13.972080 1925 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:38:13.974455 kubelet[1925]: I0129 16:38:13.972558 1925 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:38:13.977591 kubelet[1925]: E0129 16:38:13.977366 1925 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.24.202\" not found"
Jan 29 16:38:13.990113 kubelet[1925]: I0129 16:38:13.989196 1925 kubelet_node_status.go:73] "Attempting to register node" node="10.230.24.202"
Jan 29 16:38:13.996933 kubelet[1925]: I0129 16:38:13.996900 1925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:38:13.998558 kubelet[1925]: I0129 16:38:13.998527 1925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:38:13.998656 kubelet[1925]: I0129 16:38:13.998575 1925 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:38:13.998656 kubelet[1925]: I0129 16:38:13.998604 1925 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 16:38:13.998740 kubelet[1925]: E0129 16:38:13.998674 1925 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 16:38:14.001686 kubelet[1925]: I0129 16:38:14.001459 1925 kubelet_node_status.go:76] "Successfully registered node" node="10.230.24.202"
Jan 29 16:38:14.030297 kubelet[1925]: E0129 16:38:14.030246 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.131013 kubelet[1925]: E0129 16:38:14.130764 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.143763 sudo[1767]: pam_unix(sudo:session): session closed for user root
Jan 29 16:38:14.231571 kubelet[1925]: E0129 16:38:14.231457 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.287427 sshd[1766]: Connection closed by 139.178.68.195 port 37574
Jan 29 16:38:14.288474 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Jan 29 16:38:14.294032 systemd[1]: sshd@6-10.230.24.202:22-139.178.68.195:37574.service: Deactivated successfully.
Jan 29 16:38:14.297957 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:38:14.298426 systemd[1]: session-9.scope: Consumed 598ms CPU time, 108.1M memory peak.
Jan 29 16:38:14.301366 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:38:14.303483 systemd-logind[1509]: Removed session 9.
Jan 29 16:38:14.332329 kubelet[1925]: E0129 16:38:14.332275 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.433646 kubelet[1925]: E0129 16:38:14.433434 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.534413 kubelet[1925]: E0129 16:38:14.534281 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.635271 kubelet[1925]: E0129 16:38:14.635151 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.736272 kubelet[1925]: E0129 16:38:14.736017 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.828527 kubelet[1925]: I0129 16:38:14.828014 1925 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 16:38:14.828527 kubelet[1925]: W0129 16:38:14.828375 1925 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:38:14.828527 kubelet[1925]: W0129 16:38:14.828475 1925 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:38:14.828527 kubelet[1925]: W0129 16:38:14.828476 1925 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:38:14.836764 kubelet[1925]: E0129 16:38:14.836716 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:14.863429 kubelet[1925]: E0129 16:38:14.863380 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:14.937619 kubelet[1925]: E0129 16:38:14.937544 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:15.038639 kubelet[1925]: E0129 16:38:15.038410 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.24.202\" not found"
Jan 29 16:38:15.141100 kubelet[1925]: I0129 16:38:15.141020 1925 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 16:38:15.141789 containerd[1523]: time="2025-01-29T16:38:15.141638882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:38:15.142876 kubelet[1925]: I0129 16:38:15.142100 1925 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 16:38:15.862848 kubelet[1925]: I0129 16:38:15.862465 1925 apiserver.go:52] "Watching apiserver"
Jan 29 16:38:15.863814 kubelet[1925]: E0129 16:38:15.863765 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:15.868856 kubelet[1925]: I0129 16:38:15.868783 1925 topology_manager.go:215] "Topology Admit Handler" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" podNamespace="calico-system" podName="csi-node-driver-ghhfd"
Jan 29 16:38:15.869020 kubelet[1925]: I0129 16:38:15.868993 1925 topology_manager.go:215] "Topology Admit Handler" podUID="f454cec9-8792-4d01-bf6b-5388238d16ed" podNamespace="kube-system" podName="kube-proxy-m6gkj"
Jan 29 16:38:15.869159 kubelet[1925]: I0129 16:38:15.869128 1925 topology_manager.go:215] "Topology Admit Handler" podUID="5e3f6b2b-2652-48d9-a90c-8ce4e971426c" podNamespace="calico-system" podName="calico-node-nkvn9"
Jan 29 16:38:15.870687 kubelet[1925]: E0129 16:38:15.870273 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:15.882819 systemd[1]: Created slice kubepods-besteffort-podf454cec9_8792_4d01_bf6b_5388238d16ed.slice - libcontainer container kubepods-besteffort-podf454cec9_8792_4d01_bf6b_5388238d16ed.slice.
Jan 29 16:38:15.892365 kubelet[1925]: I0129 16:38:15.891990 1925 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:38:15.902369 kubelet[1925]: I0129 16:38:15.901584 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0879b6e8-c5af-4b29-b97b-72c7b290e750-registration-dir\") pod \"csi-node-driver-ghhfd\" (UID: \"0879b6e8-c5af-4b29-b97b-72c7b290e750\") " pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:15.902369 kubelet[1925]: I0129 16:38:15.901635 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f454cec9-8792-4d01-bf6b-5388238d16ed-lib-modules\") pod \"kube-proxy-m6gkj\" (UID: \"f454cec9-8792-4d01-bf6b-5388238d16ed\") " pod="kube-system/kube-proxy-m6gkj"
Jan 29 16:38:15.902369 kubelet[1925]: I0129 16:38:15.901665 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-policysync\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902369 kubelet[1925]: I0129 16:38:15.901702 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-tigera-ca-bundle\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902369 kubelet[1925]: I0129 16:38:15.901727 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-cni-net-dir\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902678 kubelet[1925]: I0129 16:38:15.901751 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5g2\" (UniqueName: \"kubernetes.io/projected/0879b6e8-c5af-4b29-b97b-72c7b290e750-kube-api-access-rj5g2\") pod \"csi-node-driver-ghhfd\" (UID: \"0879b6e8-c5af-4b29-b97b-72c7b290e750\") " pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:15.902678 kubelet[1925]: I0129 16:38:15.901776 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f454cec9-8792-4d01-bf6b-5388238d16ed-kube-proxy\") pod \"kube-proxy-m6gkj\" (UID: \"f454cec9-8792-4d01-bf6b-5388238d16ed\") " pod="kube-system/kube-proxy-m6gkj"
Jan 29 16:38:15.902678 kubelet[1925]: I0129 16:38:15.901806 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74zzd\" (UniqueName: \"kubernetes.io/projected/f454cec9-8792-4d01-bf6b-5388238d16ed-kube-api-access-74zzd\") pod \"kube-proxy-m6gkj\" (UID: \"f454cec9-8792-4d01-bf6b-5388238d16ed\") " pod="kube-system/kube-proxy-m6gkj"
Jan 29 16:38:15.902678 kubelet[1925]: I0129 16:38:15.901833 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-xtables-lock\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902678 kubelet[1925]: I0129 16:38:15.901859 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-var-lib-calico\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902922 kubelet[1925]: I0129 16:38:15.901882 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkns8\" (UniqueName: \"kubernetes.io/projected/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-kube-api-access-rkns8\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902922 kubelet[1925]: I0129 16:38:15.901906 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0879b6e8-c5af-4b29-b97b-72c7b290e750-kubelet-dir\") pod \"csi-node-driver-ghhfd\" (UID: \"0879b6e8-c5af-4b29-b97b-72c7b290e750\") " pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:15.902922 kubelet[1925]: I0129 16:38:15.901940 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-var-run-calico\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902922 kubelet[1925]: I0129 16:38:15.901962 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-cni-log-dir\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902922 kubelet[1925]: I0129 16:38:15.902008 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-flexvol-driver-host\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.902785 systemd[1]: Created slice kubepods-besteffort-pod5e3f6b2b_2652_48d9_a90c_8ce4e971426c.slice - libcontainer container kubepods-besteffort-pod5e3f6b2b_2652_48d9_a90c_8ce4e971426c.slice.
Jan 29 16:38:15.903245 kubelet[1925]: I0129 16:38:15.902035 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0879b6e8-c5af-4b29-b97b-72c7b290e750-varrun\") pod \"csi-node-driver-ghhfd\" (UID: \"0879b6e8-c5af-4b29-b97b-72c7b290e750\") " pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:15.903245 kubelet[1925]: I0129 16:38:15.902109 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0879b6e8-c5af-4b29-b97b-72c7b290e750-socket-dir\") pod \"csi-node-driver-ghhfd\" (UID: \"0879b6e8-c5af-4b29-b97b-72c7b290e750\") " pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:15.903245 kubelet[1925]: I0129 16:38:15.902140 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f454cec9-8792-4d01-bf6b-5388238d16ed-xtables-lock\") pod \"kube-proxy-m6gkj\" (UID: \"f454cec9-8792-4d01-bf6b-5388238d16ed\") " pod="kube-system/kube-proxy-m6gkj"
Jan 29 16:38:15.903245 kubelet[1925]: I0129 16:38:15.902165 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-lib-modules\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.903245 kubelet[1925]: I0129 16:38:15.902188 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-node-certs\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:15.903660 kubelet[1925]: I0129 16:38:15.902215 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e3f6b2b-2652-48d9-a90c-8ce4e971426c-cni-bin-dir\") pod \"calico-node-nkvn9\" (UID: \"5e3f6b2b-2652-48d9-a90c-8ce4e971426c\") " pod="calico-system/calico-node-nkvn9"
Jan 29 16:38:16.015709 kubelet[1925]: E0129 16:38:16.015662 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:38:16.015709 kubelet[1925]: W0129 16:38:16.015706 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:38:16.015938 kubelet[1925]: E0129 16:38:16.015765 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:38:16.028527 kubelet[1925]: E0129 16:38:16.028488 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:38:16.028527 kubelet[1925]: W0129 16:38:16.028512 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:38:16.028527 kubelet[1925]: E0129 16:38:16.028530 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 16:38:16.040437 kubelet[1925]: E0129 16:38:16.039611 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:38:16.040437 kubelet[1925]: W0129 16:38:16.039634 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:38:16.040437 kubelet[1925]: E0129 16:38:16.039652 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:38:16.046201 kubelet[1925]: E0129 16:38:16.046172 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:38:16.046291 kubelet[1925]: W0129 16:38:16.046194 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:38:16.046291 kubelet[1925]: E0129 16:38:16.046232 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:38:16.199584 containerd[1523]: time="2025-01-29T16:38:16.198580986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6gkj,Uid:f454cec9-8792-4d01-bf6b-5388238d16ed,Namespace:kube-system,Attempt:0,}" Jan 29 16:38:16.207058 containerd[1523]: time="2025-01-29T16:38:16.206677048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nkvn9,Uid:5e3f6b2b-2652-48d9-a90c-8ce4e971426c,Namespace:calico-system,Attempt:0,}" Jan 29 16:38:16.864434 kubelet[1925]: E0129 16:38:16.864331 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:16.910613 containerd[1523]: time="2025-01-29T16:38:16.910531683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:38:16.912157 containerd[1523]: time="2025-01-29T16:38:16.912113264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 16:38:16.914084 containerd[1523]: time="2025-01-29T16:38:16.914020105Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:38:16.917889 containerd[1523]: time="2025-01-29T16:38:16.917689473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:38:16.919281 containerd[1523]: time="2025-01-29T16:38:16.919184178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:38:16.924398 containerd[1523]: time="2025-01-29T16:38:16.924128914Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 717.329797ms" Jan 29 16:38:16.925410 containerd[1523]: time="2025-01-29T16:38:16.925361030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:38:16.927541 containerd[1523]: time="2025-01-29T16:38:16.927029071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 728.108595ms" Jan 29 16:38:17.021282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228842115.mount: Deactivated successfully. Jan 29 16:38:17.086262 containerd[1523]: time="2025-01-29T16:38:17.084575254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:38:17.087121 containerd[1523]: time="2025-01-29T16:38:17.086241524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:38:17.087121 containerd[1523]: time="2025-01-29T16:38:17.086273109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:17.087121 containerd[1523]: time="2025-01-29T16:38:17.086478492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:17.087417 containerd[1523]: time="2025-01-29T16:38:17.086873430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:38:17.087417 containerd[1523]: time="2025-01-29T16:38:17.086938044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:38:17.087417 containerd[1523]: time="2025-01-29T16:38:17.086956961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:17.087417 containerd[1523]: time="2025-01-29T16:38:17.087058430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:17.200599 systemd[1]: Started cri-containerd-c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8.scope - libcontainer container c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8. Jan 29 16:38:17.203296 systemd[1]: Started cri-containerd-d5e7935618c23772cc8a2166ab3eba94ef5de6381cf77d96e85d135f36ccfd93.scope - libcontainer container d5e7935618c23772cc8a2166ab3eba94ef5de6381cf77d96e85d135f36ccfd93. 
Jan 29 16:38:17.251302 containerd[1523]: time="2025-01-29T16:38:17.250895925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nkvn9,Uid:5e3f6b2b-2652-48d9-a90c-8ce4e971426c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\"" Jan 29 16:38:17.255610 containerd[1523]: time="2025-01-29T16:38:17.255559237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6gkj,Uid:f454cec9-8792-4d01-bf6b-5388238d16ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5e7935618c23772cc8a2166ab3eba94ef5de6381cf77d96e85d135f36ccfd93\"" Jan 29 16:38:17.257420 containerd[1523]: time="2025-01-29T16:38:17.257285514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 16:38:17.864699 kubelet[1925]: E0129 16:38:17.864635 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:18.001467 kubelet[1925]: E0129 16:38:18.000953 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:18.563489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428126333.mount: Deactivated successfully. 
Jan 29 16:38:18.690241 containerd[1523]: time="2025-01-29T16:38:18.690180740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:18.691933 containerd[1523]: time="2025-01-29T16:38:18.691850569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 16:38:18.692965 containerd[1523]: time="2025-01-29T16:38:18.692872988Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:18.695281 containerd[1523]: time="2025-01-29T16:38:18.695221431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:18.696558 containerd[1523]: time="2025-01-29T16:38:18.696309664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.438982788s" Jan 29 16:38:18.696558 containerd[1523]: time="2025-01-29T16:38:18.696384097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 16:38:18.698664 containerd[1523]: time="2025-01-29T16:38:18.698483091Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:38:18.700179 containerd[1523]: time="2025-01-29T16:38:18.699995979Z" level=info msg="CreateContainer within sandbox 
\"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:38:18.721877 containerd[1523]: time="2025-01-29T16:38:18.721829530Z" level=info msg="CreateContainer within sandbox \"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c\"" Jan 29 16:38:18.722888 containerd[1523]: time="2025-01-29T16:38:18.722791223Z" level=info msg="StartContainer for \"6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c\"" Jan 29 16:38:18.764611 systemd[1]: Started cri-containerd-6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c.scope - libcontainer container 6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c. Jan 29 16:38:18.810093 containerd[1523]: time="2025-01-29T16:38:18.808955765Z" level=info msg="StartContainer for \"6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c\" returns successfully" Jan 29 16:38:18.829723 systemd[1]: cri-containerd-6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c.scope: Deactivated successfully. 
Jan 29 16:38:18.864983 kubelet[1925]: E0129 16:38:18.864870 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:18.907082 containerd[1523]: time="2025-01-29T16:38:18.906983913Z" level=info msg="shim disconnected" id=6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c namespace=k8s.io Jan 29 16:38:18.907082 containerd[1523]: time="2025-01-29T16:38:18.907083746Z" level=warning msg="cleaning up after shim disconnected" id=6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c namespace=k8s.io Jan 29 16:38:18.907467 containerd[1523]: time="2025-01-29T16:38:18.907106631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:38:19.501241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f0fc09aa66b65b620e25b7a24bd9eb6dc3031e14e359eefdbab45ff0679188c-rootfs.mount: Deactivated successfully. Jan 29 16:38:19.866234 kubelet[1925]: E0129 16:38:19.866034 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:19.999529 kubelet[1925]: E0129 16:38:19.999473 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:20.276113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485985070.mount: Deactivated successfully. 
Jan 29 16:38:20.867315 kubelet[1925]: E0129 16:38:20.867160 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:20.894313 containerd[1523]: time="2025-01-29T16:38:20.893947108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:20.895496 containerd[1523]: time="2025-01-29T16:38:20.895436663Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 16:38:20.896465 containerd[1523]: time="2025-01-29T16:38:20.896405333Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:20.899218 containerd[1523]: time="2025-01-29T16:38:20.899133945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:20.900394 containerd[1523]: time="2025-01-29T16:38:20.900200034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.201674435s" Jan 29 16:38:20.900394 containerd[1523]: time="2025-01-29T16:38:20.900242139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:38:20.902222 containerd[1523]: time="2025-01-29T16:38:20.902179987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:38:20.903310 containerd[1523]: 
time="2025-01-29T16:38:20.903241942Z" level=info msg="CreateContainer within sandbox \"d5e7935618c23772cc8a2166ab3eba94ef5de6381cf77d96e85d135f36ccfd93\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:38:20.920316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446241166.mount: Deactivated successfully. Jan 29 16:38:20.926445 containerd[1523]: time="2025-01-29T16:38:20.926407713Z" level=info msg="CreateContainer within sandbox \"d5e7935618c23772cc8a2166ab3eba94ef5de6381cf77d96e85d135f36ccfd93\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0304499c8b0532a9d2b4377346be4e40c73711757cdfd9598ec9b58cf684d1ee\"" Jan 29 16:38:20.927372 containerd[1523]: time="2025-01-29T16:38:20.927108207Z" level=info msg="StartContainer for \"0304499c8b0532a9d2b4377346be4e40c73711757cdfd9598ec9b58cf684d1ee\"" Jan 29 16:38:20.970633 systemd[1]: run-containerd-runc-k8s.io-0304499c8b0532a9d2b4377346be4e40c73711757cdfd9598ec9b58cf684d1ee-runc.d8RKpw.mount: Deactivated successfully. Jan 29 16:38:20.982674 systemd[1]: Started cri-containerd-0304499c8b0532a9d2b4377346be4e40c73711757cdfd9598ec9b58cf684d1ee.scope - libcontainer container 0304499c8b0532a9d2b4377346be4e40c73711757cdfd9598ec9b58cf684d1ee. 
Jan 29 16:38:21.026621 containerd[1523]: time="2025-01-29T16:38:21.026511322Z" level=info msg="StartContainer for \"0304499c8b0532a9d2b4377346be4e40c73711757cdfd9598ec9b58cf684d1ee\" returns successfully" Jan 29 16:38:21.867375 kubelet[1925]: E0129 16:38:21.867295 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:21.999451 kubelet[1925]: E0129 16:38:21.999328 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:22.016820 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 16:38:22.055849 kubelet[1925]: I0129 16:38:22.055735 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m6gkj" podStartSLOduration=4.412308913 podStartE2EDuration="8.055672451s" podCreationTimestamp="2025-01-29 16:38:14 +0000 UTC" firstStartedPulling="2025-01-29 16:38:17.257987142 +0000 UTC m=+4.136431971" lastFinishedPulling="2025-01-29 16:38:20.901350664 +0000 UTC m=+7.779795509" observedRunningTime="2025-01-29 16:38:22.05409386 +0000 UTC m=+8.932538705" watchObservedRunningTime="2025-01-29 16:38:22.055672451 +0000 UTC m=+8.934117289" Jan 29 16:38:22.868065 kubelet[1925]: E0129 16:38:22.867990 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:23.868668 kubelet[1925]: E0129 16:38:23.868589 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:24.000693 kubelet[1925]: E0129 16:38:24.000112 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:24.591938 systemd-resolved[1403]: Clock change detected. Flushing caches. Jan 29 16:38:24.592387 systemd-timesyncd[1418]: Contacted time server [2a02:390:52e0::123]:123 (2.flatcar.pool.ntp.org). Jan 29 16:38:24.592471 systemd-timesyncd[1418]: Initial clock synchronization to Wed 2025-01-29 16:38:24.591830 UTC. Jan 29 16:38:25.380967 kubelet[1925]: E0129 16:38:25.380863 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:26.381318 kubelet[1925]: E0129 16:38:26.381257 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:26.511685 kubelet[1925]: E0129 16:38:26.511358 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:26.882809 containerd[1523]: time="2025-01-29T16:38:26.882726382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:26.883956 containerd[1523]: time="2025-01-29T16:38:26.883895295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 16:38:26.884769 containerd[1523]: time="2025-01-29T16:38:26.884690387Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:26.887638 containerd[1523]: 
time="2025-01-29T16:38:26.887579079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:26.889071 containerd[1523]: time="2025-01-29T16:38:26.888787561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.475147608s" Jan 29 16:38:26.889071 containerd[1523]: time="2025-01-29T16:38:26.888828724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 16:38:26.892142 containerd[1523]: time="2025-01-29T16:38:26.892095727Z" level=info msg="CreateContainer within sandbox \"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:38:26.910889 containerd[1523]: time="2025-01-29T16:38:26.910825240Z" level=info msg="CreateContainer within sandbox \"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8\"" Jan 29 16:38:26.911944 containerd[1523]: time="2025-01-29T16:38:26.911465262Z" level=info msg="StartContainer for \"009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8\"" Jan 29 16:38:26.966992 systemd[1]: Started cri-containerd-009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8.scope - libcontainer container 009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8. 
Jan 29 16:38:27.007458 containerd[1523]: time="2025-01-29T16:38:27.007218725Z" level=info msg="StartContainer for \"009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8\" returns successfully" Jan 29 16:38:27.382301 kubelet[1925]: E0129 16:38:27.382203 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:27.875127 systemd[1]: cri-containerd-009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8.scope: Deactivated successfully. Jan 29 16:38:27.875925 systemd[1]: cri-containerd-009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8.scope: Consumed 588ms CPU time, 174M memory peak, 151M written to disk. Jan 29 16:38:27.905238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8-rootfs.mount: Deactivated successfully. Jan 29 16:38:27.968123 kubelet[1925]: I0129 16:38:27.967387 1925 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:38:28.048801 containerd[1523]: time="2025-01-29T16:38:28.048506719Z" level=info msg="shim disconnected" id=009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8 namespace=k8s.io Jan 29 16:38:28.049397 containerd[1523]: time="2025-01-29T16:38:28.048822971Z" level=warning msg="cleaning up after shim disconnected" id=009ee5f3dc10519a7c9dd35bb13ea7aec25ed6c6c4b3562018515af35d1cb8b8 namespace=k8s.io Jan 29 16:38:28.049397 containerd[1523]: time="2025-01-29T16:38:28.048868698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:38:28.383404 kubelet[1925]: E0129 16:38:28.383332 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:28.520252 systemd[1]: Created slice kubepods-besteffort-pod0879b6e8_c5af_4b29_b97b_72c7b290e750.slice - libcontainer container kubepods-besteffort-pod0879b6e8_c5af_4b29_b97b_72c7b290e750.slice. 
Jan 29 16:38:28.524056 containerd[1523]: time="2025-01-29T16:38:28.524000052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:0,}" Jan 29 16:38:28.564886 containerd[1523]: time="2025-01-29T16:38:28.564712262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:38:28.628795 containerd[1523]: time="2025-01-29T16:38:28.628587944Z" level=error msg="Failed to destroy network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:28.631499 containerd[1523]: time="2025-01-29T16:38:28.631281079Z" level=error msg="encountered an error cleaning up failed sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:28.631499 containerd[1523]: time="2025-01-29T16:38:28.631437374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:28.632675 kubelet[1925]: E0129 16:38:28.632074 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:28.632675 kubelet[1925]: E0129 16:38:28.632188 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:28.632675 kubelet[1925]: E0129 16:38:28.632223 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:28.632280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990-shm.mount: Deactivated successfully. 
Jan 29 16:38:28.633024 kubelet[1925]: E0129 16:38:28.632310 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:29.383916 kubelet[1925]: E0129 16:38:29.383827 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:29.566813 kubelet[1925]: I0129 16:38:29.566620 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990"
Jan 29 16:38:29.568015 containerd[1523]: time="2025-01-29T16:38:29.567834093Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:29.572218 containerd[1523]: time="2025-01-29T16:38:29.568139588Z" level=info msg="Ensure that sandbox e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990 in task-service has been cleanup successfully"
Jan 29 16:38:29.572218 containerd[1523]: time="2025-01-29T16:38:29.568453268Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:29.572218 containerd[1523]: time="2025-01-29T16:38:29.568475663Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:29.572218 containerd[1523]: time="2025-01-29T16:38:29.571311262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:1,}"
Jan 29 16:38:29.571632 systemd[1]: run-netns-cni\x2d128c1461\x2d17a4\x2d17c4\x2d3ba2\x2daa1458cda0be.mount: Deactivated successfully.
Jan 29 16:38:29.654832 containerd[1523]: time="2025-01-29T16:38:29.654032088Z" level=error msg="Failed to destroy network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:29.655110 containerd[1523]: time="2025-01-29T16:38:29.655061110Z" level=error msg="encountered an error cleaning up failed sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:29.655181 containerd[1523]: time="2025-01-29T16:38:29.655144779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:29.657132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3-shm.mount: Deactivated successfully.
Jan 29 16:38:29.658585 kubelet[1925]: E0129 16:38:29.657962 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:29.659315 kubelet[1925]: E0129 16:38:29.659275 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:29.659926 kubelet[1925]: E0129 16:38:29.659442 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:29.659926 kubelet[1925]: E0129 16:38:29.659524 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:30.384940 kubelet[1925]: E0129 16:38:30.384848 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:30.574028 kubelet[1925]: I0129 16:38:30.573741 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3"
Jan 29 16:38:30.585420 containerd[1523]: time="2025-01-29T16:38:30.585305715Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\""
Jan 29 16:38:30.586041 containerd[1523]: time="2025-01-29T16:38:30.585607516Z" level=info msg="Ensure that sandbox e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3 in task-service has been cleanup successfully"
Jan 29 16:38:30.588564 systemd[1]: run-netns-cni\x2d7395e84a\x2d2003\x2d12e7\x2d7131\x2d6b243fd77689.mount: Deactivated successfully.
Jan 29 16:38:30.590761 containerd[1523]: time="2025-01-29T16:38:30.590611135Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully"
Jan 29 16:38:30.590761 containerd[1523]: time="2025-01-29T16:38:30.590650753Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully"
Jan 29 16:38:30.591921 containerd[1523]: time="2025-01-29T16:38:30.591438012Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:30.591921 containerd[1523]: time="2025-01-29T16:38:30.591571832Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:30.591921 containerd[1523]: time="2025-01-29T16:38:30.591592018Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:30.592794 containerd[1523]: time="2025-01-29T16:38:30.592558892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:2,}"
Jan 29 16:38:30.712047 containerd[1523]: time="2025-01-29T16:38:30.711913469Z" level=error msg="Failed to destroy network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:30.716828 containerd[1523]: time="2025-01-29T16:38:30.712996323Z" level=error msg="encountered an error cleaning up failed sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:30.716828 containerd[1523]: time="2025-01-29T16:38:30.713086631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:30.717377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf-shm.mount: Deactivated successfully.
Jan 29 16:38:30.719329 kubelet[1925]: E0129 16:38:30.718409 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:30.719329 kubelet[1925]: E0129 16:38:30.718517 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:30.719329 kubelet[1925]: E0129 16:38:30.718617 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:30.719522 kubelet[1925]: E0129 16:38:30.718687 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:31.385341 kubelet[1925]: E0129 16:38:31.385242 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:31.586529 kubelet[1925]: I0129 16:38:31.586450 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf"
Jan 29 16:38:31.588192 containerd[1523]: time="2025-01-29T16:38:31.588143542Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\""
Jan 29 16:38:31.589072 containerd[1523]: time="2025-01-29T16:38:31.588470291Z" level=info msg="Ensure that sandbox 59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf in task-service has been cleanup successfully"
Jan 29 16:38:31.592026 containerd[1523]: time="2025-01-29T16:38:31.591838650Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully"
Jan 29 16:38:31.592026 containerd[1523]: time="2025-01-29T16:38:31.591871822Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully"
Jan 29 16:38:31.592879 containerd[1523]: time="2025-01-29T16:38:31.592841293Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\""
Jan 29 16:38:31.592968 containerd[1523]: time="2025-01-29T16:38:31.592947153Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully"
Jan 29 16:38:31.593023 containerd[1523]: time="2025-01-29T16:38:31.592966139Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully"
Jan 29 16:38:31.593286 systemd[1]: run-netns-cni\x2da8e0eb61\x2de6b3\x2d3566\x2d192a\x2d6e14d5c2375a.mount: Deactivated successfully.
Jan 29 16:38:31.604508 containerd[1523]: time="2025-01-29T16:38:31.604403644Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:31.605090 containerd[1523]: time="2025-01-29T16:38:31.604515958Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:31.605090 containerd[1523]: time="2025-01-29T16:38:31.604570388Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:31.605318 containerd[1523]: time="2025-01-29T16:38:31.605232276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:3,}"
Jan 29 16:38:31.709980 containerd[1523]: time="2025-01-29T16:38:31.706409851Z" level=error msg="Failed to destroy network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:31.709980 containerd[1523]: time="2025-01-29T16:38:31.708228959Z" level=error msg="encountered an error cleaning up failed sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:31.709980 containerd[1523]: time="2025-01-29T16:38:31.708349536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:31.710283 kubelet[1925]: E0129 16:38:31.709222 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:31.710283 kubelet[1925]: E0129 16:38:31.709350 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:31.710283 kubelet[1925]: E0129 16:38:31.709388 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:31.710460 kubelet[1925]: E0129 16:38:31.709468 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:31.710573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b-shm.mount: Deactivated successfully.
Jan 29 16:38:32.386415 kubelet[1925]: E0129 16:38:32.386327 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:32.592703 kubelet[1925]: I0129 16:38:32.592270 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b"
Jan 29 16:38:32.593788 containerd[1523]: time="2025-01-29T16:38:32.593492003Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\""
Jan 29 16:38:32.593788 containerd[1523]: time="2025-01-29T16:38:32.593775373Z" level=info msg="Ensure that sandbox dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b in task-service has been cleanup successfully"
Jan 29 16:38:32.596867 containerd[1523]: time="2025-01-29T16:38:32.596833877Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully"
Jan 29 16:38:32.596867 containerd[1523]: time="2025-01-29T16:38:32.596865377Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully"
Jan 29 16:38:32.597281 systemd[1]: run-netns-cni\x2dd891df97\x2dd5c2\x2d1544\x2d761e\x2d9bcc27763de7.mount: Deactivated successfully.
Jan 29 16:38:32.598736 containerd[1523]: time="2025-01-29T16:38:32.598245037Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\""
Jan 29 16:38:32.598736 containerd[1523]: time="2025-01-29T16:38:32.598419548Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully"
Jan 29 16:38:32.598736 containerd[1523]: time="2025-01-29T16:38:32.598439790Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully"
Jan 29 16:38:32.600737 containerd[1523]: time="2025-01-29T16:38:32.600520192Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\""
Jan 29 16:38:32.600737 containerd[1523]: time="2025-01-29T16:38:32.600630502Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully"
Jan 29 16:38:32.600737 containerd[1523]: time="2025-01-29T16:38:32.600651719Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully"
Jan 29 16:38:32.602119 containerd[1523]: time="2025-01-29T16:38:32.601721384Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:32.602119 containerd[1523]: time="2025-01-29T16:38:32.601891951Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:32.602637 containerd[1523]: time="2025-01-29T16:38:32.602486782Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:32.603262 containerd[1523]: time="2025-01-29T16:38:32.603140905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:4,}"
Jan 29 16:38:32.701349 kubelet[1925]: I0129 16:38:32.700395 1925 topology_manager.go:215] "Topology Admit Handler" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41" podNamespace="default" podName="nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:32.714613 systemd[1]: Created slice kubepods-besteffort-pod3c303794_1a6e_4a37_b643_42f09e6c9f41.slice - libcontainer container kubepods-besteffort-pod3c303794_1a6e_4a37_b643_42f09e6c9f41.slice.
Jan 29 16:38:32.720339 containerd[1523]: time="2025-01-29T16:38:32.720211064Z" level=error msg="Failed to destroy network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:32.723110 containerd[1523]: time="2025-01-29T16:38:32.723067526Z" level=error msg="encountered an error cleaning up failed sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:32.723799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea-shm.mount: Deactivated successfully.
Jan 29 16:38:32.724417 kubelet[1925]: I0129 16:38:32.724320 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxvh\" (UniqueName: \"kubernetes.io/projected/3c303794-1a6e-4a37-b643-42f09e6c9f41-kube-api-access-djxvh\") pod \"nginx-deployment-85f456d6dd-wl4v2\" (UID: \"3c303794-1a6e-4a37-b643-42f09e6c9f41\") " pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:32.724583 containerd[1523]: time="2025-01-29T16:38:32.724469834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:32.726167 kubelet[1925]: E0129 16:38:32.726109 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:32.726564 kubelet[1925]: E0129 16:38:32.726177 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:32.726564 kubelet[1925]: E0129 16:38:32.726207 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:32.726564 kubelet[1925]: E0129 16:38:32.726307 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:33.021257 containerd[1523]: time="2025-01-29T16:38:33.020965258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:0,}"
Jan 29 16:38:33.140145 containerd[1523]: time="2025-01-29T16:38:33.140079240Z" level=error msg="Failed to destroy network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.140724 containerd[1523]: time="2025-01-29T16:38:33.140688391Z" level=error msg="encountered an error cleaning up failed sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.140842 containerd[1523]: time="2025-01-29T16:38:33.140790445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.141617 kubelet[1925]: E0129 16:38:33.141064 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.141617 kubelet[1925]: E0129 16:38:33.141139 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:33.141617 kubelet[1925]: E0129 16:38:33.141174 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:33.141805 kubelet[1925]: E0129 16:38:33.141232 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41"
Jan 29 16:38:33.386944 kubelet[1925]: E0129 16:38:33.386774 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:33.599076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4-shm.mount: Deactivated successfully.
Jan 29 16:38:33.603097 kubelet[1925]: I0129 16:38:33.603062 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea"
Jan 29 16:38:33.603957 containerd[1523]: time="2025-01-29T16:38:33.603917318Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\""
Jan 29 16:38:33.604687 containerd[1523]: time="2025-01-29T16:38:33.604657136Z" level=info msg="Ensure that sandbox f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea in task-service has been cleanup successfully"
Jan 29 16:38:33.606919 containerd[1523]: time="2025-01-29T16:38:33.606888874Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully"
Jan 29 16:38:33.607037 containerd[1523]: time="2025-01-29T16:38:33.607013161Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully"
Jan 29 16:38:33.607791 containerd[1523]: time="2025-01-29T16:38:33.607736466Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\""
Jan 29 16:38:33.608011 containerd[1523]: time="2025-01-29T16:38:33.607982268Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully"
Jan 29 16:38:33.608698 containerd[1523]: time="2025-01-29T16:38:33.608092516Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully"
Jan 29 16:38:33.608437 systemd[1]: run-netns-cni\x2dc7174ea9\x2d00a9\x2d248c\x2deab8\x2d9e2e599130a7.mount: Deactivated successfully.
Jan 29 16:38:33.608955 kubelet[1925]: I0129 16:38:33.608325 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4"
Jan 29 16:38:33.612053 containerd[1523]: time="2025-01-29T16:38:33.609176099Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\""
Jan 29 16:38:33.612053 containerd[1523]: time="2025-01-29T16:38:33.609396522Z" level=info msg="Ensure that sandbox 0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4 in task-service has been cleanup successfully"
Jan 29 16:38:33.612898 containerd[1523]: time="2025-01-29T16:38:33.612861182Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully"
Jan 29 16:38:33.613789 systemd[1]: run-netns-cni\x2d463a0106\x2da76d\x2d40d0\x2d1d80\x2dd855b6c270eb.mount: Deactivated successfully.
Jan 29 16:38:33.615369 containerd[1523]: time="2025-01-29T16:38:33.615338577Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully"
Jan 29 16:38:33.615561 containerd[1523]: time="2025-01-29T16:38:33.615531624Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\""
Jan 29 16:38:33.615744 containerd[1523]: time="2025-01-29T16:38:33.615718137Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully"
Jan 29 16:38:33.615891 containerd[1523]: time="2025-01-29T16:38:33.615866834Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully"
Jan 29 16:38:33.616819 containerd[1523]: time="2025-01-29T16:38:33.616532101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:1,}"
Jan 29 16:38:33.620500 containerd[1523]: time="2025-01-29T16:38:33.620463501Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\""
Jan 29 16:38:33.620604 containerd[1523]: time="2025-01-29T16:38:33.620578419Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully"
Jan 29 16:38:33.620668 containerd[1523]: time="2025-01-29T16:38:33.620604494Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully"
Jan 29 16:38:33.625083 containerd[1523]: time="2025-01-29T16:38:33.625024406Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:33.625236 containerd[1523]: time="2025-01-29T16:38:33.625199461Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:33.625236 containerd[1523]: time="2025-01-29T16:38:33.625228966Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:33.631232 containerd[1523]: time="2025-01-29T16:38:33.631173983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:5,}"
Jan 29 16:38:33.806697 containerd[1523]: time="2025-01-29T16:38:33.806611972Z" level=error msg="Failed to destroy network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.808132 containerd[1523]: time="2025-01-29T16:38:33.808088423Z" level=error msg="encountered an error cleaning up failed sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.808232 containerd[1523]: time="2025-01-29T16:38:33.808175561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.808612 kubelet[1925]: E0129 16:38:33.808482 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:33.808612 kubelet[1925]: E0129 16:38:33.808568 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:33.808612 kubelet[1925]: E0129 16:38:33.808597 1925 kuberuntime_manager.go:1166]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:33.809224 kubelet[1925]: E0129 16:38:33.808670 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41" Jan 29 16:38:33.814278 containerd[1523]: time="2025-01-29T16:38:33.814210159Z" level=error msg="Failed to destroy network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:33.815114 containerd[1523]: time="2025-01-29T16:38:33.814847052Z" level=error msg="encountered an error cleaning up failed sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 16:38:33.815114 containerd[1523]: time="2025-01-29T16:38:33.814923553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:33.815943 kubelet[1925]: E0129 16:38:33.815796 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:33.815943 kubelet[1925]: E0129 16:38:33.815848 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:33.815943 kubelet[1925]: E0129 16:38:33.815878 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:33.816129 kubelet[1925]: E0129 16:38:33.815927 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:34.371422 kubelet[1925]: E0129 16:38:34.371349 1925 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:34.387686 kubelet[1925]: E0129 16:38:34.387652 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:34.598253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852-shm.mount: Deactivated successfully. 
Jan 29 16:38:34.618498 kubelet[1925]: I0129 16:38:34.617936 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5" Jan 29 16:38:34.620340 containerd[1523]: time="2025-01-29T16:38:34.620283129Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:38:34.620883 containerd[1523]: time="2025-01-29T16:38:34.620643981Z" level=info msg="Ensure that sandbox 2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5 in task-service has been cleanup successfully" Jan 29 16:38:34.623863 containerd[1523]: time="2025-01-29T16:38:34.621150612Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully" Jan 29 16:38:34.623863 containerd[1523]: time="2025-01-29T16:38:34.621180879Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully" Jan 29 16:38:34.628104 containerd[1523]: time="2025-01-29T16:38:34.624326398Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" Jan 29 16:38:34.628104 containerd[1523]: time="2025-01-29T16:38:34.624460942Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully" Jan 29 16:38:34.628104 containerd[1523]: time="2025-01-29T16:38:34.624524834Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully" Jan 29 16:38:34.626785 systemd[1]: run-netns-cni\x2dd5b8fae1\x2d410c\x2d6800\x2dfe7d\x2d4a15124f1d1f.mount: Deactivated successfully. 
Jan 29 16:38:34.628717 kubelet[1925]: I0129 16:38:34.626135 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852" Jan 29 16:38:34.633784 containerd[1523]: time="2025-01-29T16:38:34.630768209Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" Jan 29 16:38:34.633784 containerd[1523]: time="2025-01-29T16:38:34.630915320Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully" Jan 29 16:38:34.633784 containerd[1523]: time="2025-01-29T16:38:34.630936738Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully" Jan 29 16:38:34.633784 containerd[1523]: time="2025-01-29T16:38:34.631066395Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:38:34.633784 containerd[1523]: time="2025-01-29T16:38:34.631407132Z" level=info msg="Ensure that sandbox 405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852 in task-service has been cleanup successfully" Jan 29 16:38:34.634313 containerd[1523]: time="2025-01-29T16:38:34.634284338Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:38:34.634420 containerd[1523]: time="2025-01-29T16:38:34.634395446Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:38:34.635347 containerd[1523]: time="2025-01-29T16:38:34.634909452Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:38:34.635071 systemd[1]: run-netns-cni\x2de092d02c\x2d9364\x2d2918\x2d25ab\x2d18ba06185d29.mount: Deactivated successfully. 
Jan 29 16:38:34.637203 containerd[1523]: time="2025-01-29T16:38:34.637174953Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:38:34.637340 containerd[1523]: time="2025-01-29T16:38:34.637315057Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:38:34.637527 containerd[1523]: time="2025-01-29T16:38:34.637499941Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:38:34.638119 containerd[1523]: time="2025-01-29T16:38:34.638091056Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully" Jan 29 16:38:34.638289 containerd[1523]: time="2025-01-29T16:38:34.638263842Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully" Jan 29 16:38:34.639329 containerd[1523]: time="2025-01-29T16:38:34.639298642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:2,}" Jan 29 16:38:34.644102 containerd[1523]: time="2025-01-29T16:38:34.644052874Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:38:34.644602 containerd[1523]: time="2025-01-29T16:38:34.644218701Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully" Jan 29 16:38:34.644602 containerd[1523]: time="2025-01-29T16:38:34.644263300Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully" Jan 29 16:38:34.645300 containerd[1523]: time="2025-01-29T16:38:34.645209494Z" level=info msg="StopPodSandbox for 
\"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:38:34.645405 containerd[1523]: time="2025-01-29T16:38:34.645378340Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully" Jan 29 16:38:34.645477 containerd[1523]: time="2025-01-29T16:38:34.645404875Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully" Jan 29 16:38:34.648302 containerd[1523]: time="2025-01-29T16:38:34.648133913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:6,}" Jan 29 16:38:34.854651 containerd[1523]: time="2025-01-29T16:38:34.854552690Z" level=error msg="Failed to destroy network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.855772 containerd[1523]: time="2025-01-29T16:38:34.855607297Z" level=error msg="encountered an error cleaning up failed sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.855772 containerd[1523]: time="2025-01-29T16:38:34.855708531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.857465 kubelet[1925]: E0129 16:38:34.856375 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.857465 kubelet[1925]: E0129 16:38:34.857070 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:34.857465 kubelet[1925]: E0129 16:38:34.857106 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:34.857676 kubelet[1925]: E0129 16:38:34.857192 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:34.866603 containerd[1523]: time="2025-01-29T16:38:34.866540201Z" level=error msg="Failed to destroy network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.867340 containerd[1523]: time="2025-01-29T16:38:34.867303664Z" level=error msg="encountered an error cleaning up failed sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.867729 containerd[1523]: time="2025-01-29T16:38:34.867665114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.869121 kubelet[1925]: E0129 16:38:34.869062 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:34.869213 kubelet[1925]: E0129 16:38:34.869150 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:34.869213 kubelet[1925]: E0129 16:38:34.869185 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:34.869340 kubelet[1925]: E0129 16:38:34.869279 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41" Jan 29 16:38:35.388663 kubelet[1925]: E0129 16:38:35.388550 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:35.599827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549-shm.mount: Deactivated successfully. Jan 29 16:38:35.634706 kubelet[1925]: I0129 16:38:35.634408 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9" Jan 29 16:38:35.637135 containerd[1523]: time="2025-01-29T16:38:35.636046282Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" Jan 29 16:38:35.646670 kubelet[1925]: I0129 16:38:35.646534 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549" Jan 29 16:38:35.655661 containerd[1523]: time="2025-01-29T16:38:35.655394503Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:38:35.655867 containerd[1523]: time="2025-01-29T16:38:35.655832503Z" level=info msg="Ensure that sandbox 2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549 in task-service has been cleanup successfully" Jan 29 16:38:35.656269 containerd[1523]: time="2025-01-29T16:38:35.656091013Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully" Jan 29 16:38:35.656269 containerd[1523]: time="2025-01-29T16:38:35.656118885Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully" Jan 29 16:38:35.659012 containerd[1523]: time="2025-01-29T16:38:35.658979634Z" level=info 
msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:38:35.659121 containerd[1523]: time="2025-01-29T16:38:35.659094648Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:38:35.659188 containerd[1523]: time="2025-01-29T16:38:35.659120613Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:38:35.660434 systemd[1]: run-netns-cni\x2d41e731f5\x2de739\x2d0ca7\x2de5c5\x2d811a35a9ca33.mount: Deactivated successfully. Jan 29 16:38:35.661664 containerd[1523]: time="2025-01-29T16:38:35.661156836Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:38:35.661664 containerd[1523]: time="2025-01-29T16:38:35.661287367Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:38:35.661664 containerd[1523]: time="2025-01-29T16:38:35.661309508Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:38:35.663298 containerd[1523]: time="2025-01-29T16:38:35.662615547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:3,}" Jan 29 16:38:35.670805 containerd[1523]: time="2025-01-29T16:38:35.666509262Z" level=info msg="Ensure that sandbox f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9 in task-service has been cleanup successfully" Jan 29 16:38:35.670805 containerd[1523]: time="2025-01-29T16:38:35.666841072Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully" Jan 29 16:38:35.670805 containerd[1523]: time="2025-01-29T16:38:35.666867086Z" 
level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully" Jan 29 16:38:35.671409 systemd[1]: run-netns-cni\x2d6be4d33f\x2d603c\x2d50da\x2d4439\x2d93c8baf30029.mount: Deactivated successfully. Jan 29 16:38:35.676208 containerd[1523]: time="2025-01-29T16:38:35.676116661Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:38:35.676366 containerd[1523]: time="2025-01-29T16:38:35.676326422Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully" Jan 29 16:38:35.676366 containerd[1523]: time="2025-01-29T16:38:35.676358140Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully" Jan 29 16:38:35.677286 containerd[1523]: time="2025-01-29T16:38:35.677231028Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" Jan 29 16:38:35.677374 containerd[1523]: time="2025-01-29T16:38:35.677345618Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully" Jan 29 16:38:35.677441 containerd[1523]: time="2025-01-29T16:38:35.677416454Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully" Jan 29 16:38:35.683861 containerd[1523]: time="2025-01-29T16:38:35.683510388Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" Jan 29 16:38:35.683861 containerd[1523]: time="2025-01-29T16:38:35.683639250Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully" Jan 29 16:38:35.683861 containerd[1523]: time="2025-01-29T16:38:35.683656818Z" level=info msg="StopPodSandbox for 
\"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully" Jan 29 16:38:35.685704 containerd[1523]: time="2025-01-29T16:38:35.685664867Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.685791889Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.685829398Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.686274300Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.686377041Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.686394973Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.686829211Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.686928978Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.686947328Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully" Jan 29 16:38:35.688443 containerd[1523]: time="2025-01-29T16:38:35.687951603Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:7,}" Jan 29 16:38:35.827818 containerd[1523]: time="2025-01-29T16:38:35.827620698Z" level=error msg="Failed to destroy network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:35.828898 containerd[1523]: time="2025-01-29T16:38:35.828559270Z" level=error msg="encountered an error cleaning up failed sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:35.828898 containerd[1523]: time="2025-01-29T16:38:35.828679994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:35.830164 kubelet[1925]: E0129 16:38:35.829715 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:35.830164 kubelet[1925]: 
E0129 16:38:35.829999 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:35.831170 kubelet[1925]: E0129 16:38:35.830043 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:35.831170 kubelet[1925]: E0129 16:38:35.830604 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41" Jan 29 16:38:35.907322 update_engine[1510]: I20250129 16:38:35.905938 1510 update_attempter.cc:509] Updating boot flags... 
Jan 29 16:38:36.004351 containerd[1523]: time="2025-01-29T16:38:36.004053237Z" level=error msg="Failed to destroy network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.006591 containerd[1523]: time="2025-01-29T16:38:36.006131423Z" level=error msg="encountered an error cleaning up failed sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.013772 containerd[1523]: time="2025-01-29T16:38:36.010236873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.013870 kubelet[1925]: E0129 16:38:36.011077 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.013870 kubelet[1925]: E0129 16:38:36.011181 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:36.013870 kubelet[1925]: E0129 16:38:36.011226 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:36.014056 kubelet[1925]: E0129 16:38:36.011305 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:36.048918 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2750) Jan 29 16:38:36.201608 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2749) Jan 29 16:38:36.318802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2749) Jan 29 
16:38:36.388820 kubelet[1925]: E0129 16:38:36.388766 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:36.601882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e-shm.mount: Deactivated successfully. Jan 29 16:38:36.657093 kubelet[1925]: I0129 16:38:36.657041 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e" Jan 29 16:38:36.659892 containerd[1523]: time="2025-01-29T16:38:36.659408810Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" Jan 29 16:38:36.659892 containerd[1523]: time="2025-01-29T16:38:36.659694393Z" level=info msg="Ensure that sandbox d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e in task-service has been cleanup successfully" Jan 29 16:38:36.660452 containerd[1523]: time="2025-01-29T16:38:36.659943258Z" level=info msg="TearDown network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" successfully" Jan 29 16:38:36.660452 containerd[1523]: time="2025-01-29T16:38:36.659967236Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" returns successfully" Jan 29 16:38:36.662565 containerd[1523]: time="2025-01-29T16:38:36.662331945Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:38:36.662565 containerd[1523]: time="2025-01-29T16:38:36.662459883Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully" Jan 29 16:38:36.662565 containerd[1523]: time="2025-01-29T16:38:36.662492522Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully" Jan 
29 16:38:36.664103 containerd[1523]: time="2025-01-29T16:38:36.664073184Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:38:36.665132 containerd[1523]: time="2025-01-29T16:38:36.664634814Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:38:36.665132 containerd[1523]: time="2025-01-29T16:38:36.664665601Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:38:36.665334 systemd[1]: run-netns-cni\x2daa522bca\x2d701c\x2db0e6\x2de1be\x2d09a960cacd2d.mount: Deactivated successfully. Jan 29 16:38:36.666579 containerd[1523]: time="2025-01-29T16:38:36.665950206Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:38:36.666579 containerd[1523]: time="2025-01-29T16:38:36.666053313Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:38:36.666579 containerd[1523]: time="2025-01-29T16:38:36.666072010Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:38:36.667818 containerd[1523]: time="2025-01-29T16:38:36.667777182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:4,}" Jan 29 16:38:36.679752 kubelet[1925]: I0129 16:38:36.679612 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb" Jan 29 16:38:36.681466 containerd[1523]: time="2025-01-29T16:38:36.681261943Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\"" Jan 29 16:38:36.682543 
containerd[1523]: time="2025-01-29T16:38:36.682353901Z" level=info msg="Ensure that sandbox cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb in task-service has been cleanup successfully" Jan 29 16:38:36.685242 containerd[1523]: time="2025-01-29T16:38:36.685058129Z" level=info msg="TearDown network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" successfully" Jan 29 16:38:36.685242 containerd[1523]: time="2025-01-29T16:38:36.685088824Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" returns successfully" Jan 29 16:38:36.686155 containerd[1523]: time="2025-01-29T16:38:36.686100257Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" Jan 29 16:38:36.686954 containerd[1523]: time="2025-01-29T16:38:36.686256168Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully" Jan 29 16:38:36.686954 containerd[1523]: time="2025-01-29T16:38:36.686282469Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully" Jan 29 16:38:36.686954 containerd[1523]: time="2025-01-29T16:38:36.686587882Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:38:36.686954 containerd[1523]: time="2025-01-29T16:38:36.686699363Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully" Jan 29 16:38:36.686954 containerd[1523]: time="2025-01-29T16:38:36.686727046Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully" Jan 29 16:38:36.692833 containerd[1523]: time="2025-01-29T16:38:36.690350500Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" Jan 29 
16:38:36.692833 containerd[1523]: time="2025-01-29T16:38:36.690456816Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully" Jan 29 16:38:36.692833 containerd[1523]: time="2025-01-29T16:38:36.690484908Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully" Jan 29 16:38:36.690988 systemd[1]: run-netns-cni\x2dabc5260d\x2d2ac3\x2de517\x2d9a04\x2d2c06d37950f2.mount: Deactivated successfully. Jan 29 16:38:36.694407 containerd[1523]: time="2025-01-29T16:38:36.693330915Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" Jan 29 16:38:36.694407 containerd[1523]: time="2025-01-29T16:38:36.693584264Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully" Jan 29 16:38:36.694407 containerd[1523]: time="2025-01-29T16:38:36.693807456Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully" Jan 29 16:38:36.700715 containerd[1523]: time="2025-01-29T16:38:36.700270795Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:38:36.700715 containerd[1523]: time="2025-01-29T16:38:36.700387935Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully" Jan 29 16:38:36.700715 containerd[1523]: time="2025-01-29T16:38:36.700407431Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully" Jan 29 16:38:36.703119 containerd[1523]: time="2025-01-29T16:38:36.702280197Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:38:36.703119 containerd[1523]: time="2025-01-29T16:38:36.702403340Z" 
level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully" Jan 29 16:38:36.703119 containerd[1523]: time="2025-01-29T16:38:36.702485836Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully" Jan 29 16:38:36.703868 containerd[1523]: time="2025-01-29T16:38:36.703814137Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:38:36.704366 containerd[1523]: time="2025-01-29T16:38:36.704263280Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully" Jan 29 16:38:36.704677 containerd[1523]: time="2025-01-29T16:38:36.704551936Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully" Jan 29 16:38:36.705778 containerd[1523]: time="2025-01-29T16:38:36.705669519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:8,}" Jan 29 16:38:36.918535 containerd[1523]: time="2025-01-29T16:38:36.917605736Z" level=error msg="Failed to destroy network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.919856 containerd[1523]: time="2025-01-29T16:38:36.918887856Z" level=error msg="Failed to destroy network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.919856 containerd[1523]: 
time="2025-01-29T16:38:36.918947485Z" level=error msg="encountered an error cleaning up failed sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.919856 containerd[1523]: time="2025-01-29T16:38:36.919042848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.920039 kubelet[1925]: E0129 16:38:36.919399 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.920039 kubelet[1925]: E0129 16:38:36.919477 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:36.920039 kubelet[1925]: E0129 16:38:36.919507 1925 kuberuntime_manager.go:1166] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd" Jan 29 16:38:36.920249 kubelet[1925]: E0129 16:38:36.919578 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750" Jan 29 16:38:36.921449 containerd[1523]: time="2025-01-29T16:38:36.921110685Z" level=error msg="encountered an error cleaning up failed sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.921449 containerd[1523]: time="2025-01-29T16:38:36.921231276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.921826 kubelet[1925]: E0129 16:38:36.921597 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:38:36.921826 kubelet[1925]: E0129 16:38:36.921654 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:36.921826 kubelet[1925]: E0129 16:38:36.921680 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2" Jan 29 16:38:36.922104 kubelet[1925]: E0129 16:38:36.922024 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41" Jan 29 16:38:37.389707 kubelet[1925]: E0129 16:38:37.389627 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:37.598845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef-shm.mount: Deactivated successfully. Jan 29 16:38:37.693650 kubelet[1925]: I0129 16:38:37.692319 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef" Jan 29 16:38:37.696398 containerd[1523]: time="2025-01-29T16:38:37.696276650Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\"" Jan 29 16:38:37.697438 containerd[1523]: time="2025-01-29T16:38:37.697406689Z" level=info msg="Ensure that sandbox fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef in task-service has been cleanup successfully" Jan 29 16:38:37.698718 containerd[1523]: time="2025-01-29T16:38:37.698663108Z" level=info msg="TearDown network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" successfully" Jan 29 16:38:37.700954 containerd[1523]: time="2025-01-29T16:38:37.700921904Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" returns successfully" Jan 29 16:38:37.701837 systemd[1]: 
run-netns-cni\x2df9eb1339\x2d6451\x2d079c\x2d66be\x2de8355e414418.mount: Deactivated successfully. Jan 29 16:38:37.704620 containerd[1523]: time="2025-01-29T16:38:37.704269560Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" Jan 29 16:38:37.704620 containerd[1523]: time="2025-01-29T16:38:37.704385746Z" level=info msg="TearDown network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" successfully" Jan 29 16:38:37.704620 containerd[1523]: time="2025-01-29T16:38:37.704405683Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" returns successfully" Jan 29 16:38:37.705670 containerd[1523]: time="2025-01-29T16:38:37.705639936Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:38:37.706802 containerd[1523]: time="2025-01-29T16:38:37.706757364Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully" Jan 29 16:38:37.707216 containerd[1523]: time="2025-01-29T16:38:37.706928436Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully" Jan 29 16:38:37.707884 containerd[1523]: time="2025-01-29T16:38:37.707348814Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:38:37.707884 containerd[1523]: time="2025-01-29T16:38:37.707465224Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:38:37.707884 containerd[1523]: time="2025-01-29T16:38:37.707493023Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:38:37.709290 containerd[1523]: time="2025-01-29T16:38:37.709255215Z" level=info msg="StopPodSandbox 
for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:38:37.709422 containerd[1523]: time="2025-01-29T16:38:37.709364131Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:38:37.709500 containerd[1523]: time="2025-01-29T16:38:37.709423003Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:38:37.710785 containerd[1523]: time="2025-01-29T16:38:37.710687103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:5,}" Jan 29 16:38:37.740144 kubelet[1925]: I0129 16:38:37.739245 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8" Jan 29 16:38:37.741523 containerd[1523]: time="2025-01-29T16:38:37.741482156Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\"" Jan 29 16:38:37.741784 containerd[1523]: time="2025-01-29T16:38:37.741736566Z" level=info msg="Ensure that sandbox c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8 in task-service has been cleanup successfully" Jan 29 16:38:37.743803 containerd[1523]: time="2025-01-29T16:38:37.742468639Z" level=info msg="TearDown network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" successfully" Jan 29 16:38:37.743803 containerd[1523]: time="2025-01-29T16:38:37.742498120Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" returns successfully" Jan 29 16:38:37.744389 containerd[1523]: time="2025-01-29T16:38:37.744355890Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\"" Jan 29 16:38:37.744920 containerd[1523]: 
time="2025-01-29T16:38:37.744465927Z" level=info msg="TearDown network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" successfully" Jan 29 16:38:37.744920 containerd[1523]: time="2025-01-29T16:38:37.744491198Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" returns successfully" Jan 29 16:38:37.747254 containerd[1523]: time="2025-01-29T16:38:37.745171272Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" Jan 29 16:38:37.747254 containerd[1523]: time="2025-01-29T16:38:37.745277325Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully" Jan 29 16:38:37.747254 containerd[1523]: time="2025-01-29T16:38:37.745295391Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully" Jan 29 16:38:37.747254 containerd[1523]: time="2025-01-29T16:38:37.745780676Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:38:37.747254 containerd[1523]: time="2025-01-29T16:38:37.745875400Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully" Jan 29 16:38:37.747254 containerd[1523]: time="2025-01-29T16:38:37.745899104Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully" Jan 29 16:38:37.745256 systemd[1]: run-netns-cni\x2d74a4bc71\x2d6c3e\x2d2c76\x2d97e4\x2d9d3a8dc643ba.mount: Deactivated successfully. 
Jan 29 16:38:37.748858 containerd[1523]: time="2025-01-29T16:38:37.748417086Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\""
Jan 29 16:38:37.748858 containerd[1523]: time="2025-01-29T16:38:37.748526410Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully"
Jan 29 16:38:37.748858 containerd[1523]: time="2025-01-29T16:38:37.748548483Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully"
Jan 29 16:38:37.751201 containerd[1523]: time="2025-01-29T16:38:37.751168548Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\""
Jan 29 16:38:37.751293 containerd[1523]: time="2025-01-29T16:38:37.751277459Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully"
Jan 29 16:38:37.751361 containerd[1523]: time="2025-01-29T16:38:37.751296262Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully"
Jan 29 16:38:37.751683 containerd[1523]: time="2025-01-29T16:38:37.751581675Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\""
Jan 29 16:38:37.751970 containerd[1523]: time="2025-01-29T16:38:37.751688012Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully"
Jan 29 16:38:37.751970 containerd[1523]: time="2025-01-29T16:38:37.751708043Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully"
Jan 29 16:38:37.752237 containerd[1523]: time="2025-01-29T16:38:37.752093831Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\""
Jan 29 16:38:37.752237 containerd[1523]: time="2025-01-29T16:38:37.752224629Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully"
Jan 29 16:38:37.752485 containerd[1523]: time="2025-01-29T16:38:37.752243028Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully"
Jan 29 16:38:37.754775 containerd[1523]: time="2025-01-29T16:38:37.754608975Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:37.754775 containerd[1523]: time="2025-01-29T16:38:37.754718325Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:37.754775 containerd[1523]: time="2025-01-29T16:38:37.754737893Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:37.756417 containerd[1523]: time="2025-01-29T16:38:37.756296225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:9,}"
Jan 29 16:38:37.876290 containerd[1523]: time="2025-01-29T16:38:37.876212985Z" level=error msg="Failed to destroy network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.877154 containerd[1523]: time="2025-01-29T16:38:37.876866828Z" level=error msg="encountered an error cleaning up failed sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.877235 containerd[1523]: time="2025-01-29T16:38:37.877201597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.877623 kubelet[1925]: E0129 16:38:37.877554 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.877867 kubelet[1925]: E0129 16:38:37.877827 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:37.878996 kubelet[1925]: E0129 16:38:37.878817 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:37.878996 kubelet[1925]: E0129 16:38:37.878916 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41"
Jan 29 16:38:37.908142 containerd[1523]: time="2025-01-29T16:38:37.908072267Z" level=error msg="Failed to destroy network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.908810 containerd[1523]: time="2025-01-29T16:38:37.908733923Z" level=error msg="encountered an error cleaning up failed sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.908999 containerd[1523]: time="2025-01-29T16:38:37.908962262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.909423 kubelet[1925]: E0129 16:38:37.909375 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:37.909623 kubelet[1925]: E0129 16:38:37.909584 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:37.909739 kubelet[1925]: E0129 16:38:37.909712 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:37.910553 kubelet[1925]: E0129 16:38:37.910047 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:38.390857 kubelet[1925]: E0129 16:38:38.390599 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:38.599709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc-shm.mount: Deactivated successfully.
Jan 29 16:38:38.599925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e-shm.mount: Deactivated successfully.
Jan 29 16:38:38.628578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284468193.mount: Deactivated successfully.
Jan 29 16:38:38.683037 containerd[1523]: time="2025-01-29T16:38:38.682768876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:38.684007 containerd[1523]: time="2025-01-29T16:38:38.683860761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 29 16:38:38.684646 containerd[1523]: time="2025-01-29T16:38:38.684571203Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:38.687861 containerd[1523]: time="2025-01-29T16:38:38.687228783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:38.688591 containerd[1523]: time="2025-01-29T16:38:38.688299330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.12348284s"
Jan 29 16:38:38.688591 containerd[1523]: time="2025-01-29T16:38:38.688342718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 29 16:38:38.705974 containerd[1523]: time="2025-01-29T16:38:38.705915857Z" level=info msg="CreateContainer within sandbox \"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 29 16:38:38.729049 containerd[1523]: time="2025-01-29T16:38:38.728998284Z" level=info msg="CreateContainer within sandbox \"c02430714488084088f7f18745c5192cd79d21158c01c1fce1b2e53910c1a9e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c97a44b3cc29f2201398d4c03c524d4888c927fa95b4f50a9dc219001b026c7\""
Jan 29 16:38:38.730390 containerd[1523]: time="2025-01-29T16:38:38.730173626Z" level=info msg="StartContainer for \"7c97a44b3cc29f2201398d4c03c524d4888c927fa95b4f50a9dc219001b026c7\""
Jan 29 16:38:38.748269 kubelet[1925]: I0129 16:38:38.748225 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc"
Jan 29 16:38:38.749611 containerd[1523]: time="2025-01-29T16:38:38.749544379Z" level=info msg="StopPodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\""
Jan 29 16:38:38.750125 containerd[1523]: time="2025-01-29T16:38:38.750076396Z" level=info msg="Ensure that sandbox 4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc in task-service has been cleanup successfully"
Jan 29 16:38:38.750591 containerd[1523]: time="2025-01-29T16:38:38.750562435Z" level=info msg="TearDown network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" successfully"
Jan 29 16:38:38.750742 containerd[1523]: time="2025-01-29T16:38:38.750716018Z" level=info msg="StopPodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" returns successfully"
Jan 29 16:38:38.751446 containerd[1523]: time="2025-01-29T16:38:38.751382046Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\""
Jan 29 16:38:38.753218 containerd[1523]: time="2025-01-29T16:38:38.752176657Z" level=info msg="TearDown network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" successfully"
Jan 29 16:38:38.753340 containerd[1523]: time="2025-01-29T16:38:38.753314441Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" returns successfully"
Jan 29 16:38:38.754638 containerd[1523]: time="2025-01-29T16:38:38.753953704Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\""
Jan 29 16:38:38.754638 containerd[1523]: time="2025-01-29T16:38:38.754059542Z" level=info msg="TearDown network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" successfully"
Jan 29 16:38:38.754638 containerd[1523]: time="2025-01-29T16:38:38.754079289Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" returns successfully"
Jan 29 16:38:38.754638 containerd[1523]: time="2025-01-29T16:38:38.754575118Z" level=info msg="StopPodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\""
Jan 29 16:38:38.754910 kubelet[1925]: I0129 16:38:38.754001 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e"
Jan 29 16:38:38.754963 containerd[1523]: time="2025-01-29T16:38:38.754835839Z" level=info msg="Ensure that sandbox c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e in task-service has been cleanup successfully"
Jan 29 16:38:38.755387 containerd[1523]: time="2025-01-29T16:38:38.755348714Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\""
Jan 29 16:38:38.756048 containerd[1523]: time="2025-01-29T16:38:38.755602303Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully"
Jan 29 16:38:38.756048 containerd[1523]: time="2025-01-29T16:38:38.755630301Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully"
Jan 29 16:38:38.756048 containerd[1523]: time="2025-01-29T16:38:38.755447605Z" level=info msg="TearDown network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" successfully"
Jan 29 16:38:38.756048 containerd[1523]: time="2025-01-29T16:38:38.755694706Z" level=info msg="StopPodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" returns successfully"
Jan 29 16:38:38.757297 containerd[1523]: time="2025-01-29T16:38:38.756826826Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\""
Jan 29 16:38:38.757297 containerd[1523]: time="2025-01-29T16:38:38.756938694Z" level=info msg="TearDown network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" successfully"
Jan 29 16:38:38.757297 containerd[1523]: time="2025-01-29T16:38:38.756956887Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" returns successfully"
Jan 29 16:38:38.757297 containerd[1523]: time="2025-01-29T16:38:38.757022015Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\""
Jan 29 16:38:38.757297 containerd[1523]: time="2025-01-29T16:38:38.757126264Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully"
Jan 29 16:38:38.757297 containerd[1523]: time="2025-01-29T16:38:38.757143666Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully"
Jan 29 16:38:38.758387 containerd[1523]: time="2025-01-29T16:38:38.758259287Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\""
Jan 29 16:38:38.758515 containerd[1523]: time="2025-01-29T16:38:38.758489384Z" level=info msg="TearDown network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" successfully"
Jan 29 16:38:38.758703 containerd[1523]: time="2025-01-29T16:38:38.758632371Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" returns successfully"
Jan 29 16:38:38.759299 containerd[1523]: time="2025-01-29T16:38:38.759250656Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\""
Jan 29 16:38:38.759556 containerd[1523]: time="2025-01-29T16:38:38.759489359Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully"
Jan 29 16:38:38.759556 containerd[1523]: time="2025-01-29T16:38:38.759514037Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully"
Jan 29 16:38:38.760148 containerd[1523]: time="2025-01-29T16:38:38.760081638Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\""
Jan 29 16:38:38.760389 containerd[1523]: time="2025-01-29T16:38:38.760259097Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully"
Jan 29 16:38:38.760389 containerd[1523]: time="2025-01-29T16:38:38.760310213Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully"
Jan 29 16:38:38.760389 containerd[1523]: time="2025-01-29T16:38:38.760377342Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\""
Jan 29 16:38:38.760550 containerd[1523]: time="2025-01-29T16:38:38.760471349Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully"
Jan 29 16:38:38.760550 containerd[1523]: time="2025-01-29T16:38:38.760489063Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully"
Jan 29 16:38:38.761779 containerd[1523]: time="2025-01-29T16:38:38.761576643Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\""
Jan 29 16:38:38.761779 containerd[1523]: time="2025-01-29T16:38:38.761717231Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully"
Jan 29 16:38:38.761779 containerd[1523]: time="2025-01-29T16:38:38.761737550Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully"
Jan 29 16:38:38.762532 containerd[1523]: time="2025-01-29T16:38:38.762293614Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\""
Jan 29 16:38:38.763444 containerd[1523]: time="2025-01-29T16:38:38.762745108Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\""
Jan 29 16:38:38.764033 containerd[1523]: time="2025-01-29T16:38:38.763895240Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully"
Jan 29 16:38:38.764033 containerd[1523]: time="2025-01-29T16:38:38.763919777Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully"
Jan 29 16:38:38.764033 containerd[1523]: time="2025-01-29T16:38:38.763953550Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully"
Jan 29 16:38:38.764033 containerd[1523]: time="2025-01-29T16:38:38.763970726Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully"
Jan 29 16:38:38.765085 containerd[1523]: time="2025-01-29T16:38:38.764877445Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\""
Jan 29 16:38:38.765085 containerd[1523]: time="2025-01-29T16:38:38.764987060Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully"
Jan 29 16:38:38.765085 containerd[1523]: time="2025-01-29T16:38:38.765006370Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully"
Jan 29 16:38:38.765967 containerd[1523]: time="2025-01-29T16:38:38.765469290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:6,}"
Jan 29 16:38:38.766357 containerd[1523]: time="2025-01-29T16:38:38.766307071Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\""
Jan 29 16:38:38.766437 containerd[1523]: time="2025-01-29T16:38:38.766420197Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully"
Jan 29 16:38:38.766639 containerd[1523]: time="2025-01-29T16:38:38.766439380Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully"
Jan 29 16:38:38.769170 containerd[1523]: time="2025-01-29T16:38:38.768906313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:10,}"
Jan 29 16:38:38.838449 systemd[1]: Started cri-containerd-7c97a44b3cc29f2201398d4c03c524d4888c927fa95b4f50a9dc219001b026c7.scope - libcontainer container 7c97a44b3cc29f2201398d4c03c524d4888c927fa95b4f50a9dc219001b026c7.
Jan 29 16:38:38.957544 containerd[1523]: time="2025-01-29T16:38:38.954150111Z" level=error msg="Failed to destroy network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.957544 containerd[1523]: time="2025-01-29T16:38:38.954653267Z" level=error msg="encountered an error cleaning up failed sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.957544 containerd[1523]: time="2025-01-29T16:38:38.954729941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.958009 kubelet[1925]: E0129 16:38:38.956234 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.958009 kubelet[1925]: E0129 16:38:38.956309 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:38.958009 kubelet[1925]: E0129 16:38:38.956338 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-wl4v2"
Jan 29 16:38:38.958208 kubelet[1925]: E0129 16:38:38.956393 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-wl4v2_default(3c303794-1a6e-4a37-b643-42f09e6c9f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-wl4v2" podUID="3c303794-1a6e-4a37-b643-42f09e6c9f41"
Jan 29 16:38:38.959632 containerd[1523]: time="2025-01-29T16:38:38.958533628Z" level=error msg="Failed to destroy network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.959632 containerd[1523]: time="2025-01-29T16:38:38.959142304Z" level=error msg="encountered an error cleaning up failed sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.959632 containerd[1523]: time="2025-01-29T16:38:38.959268698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.960155 kubelet[1925]: E0129 16:38:38.959813 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:38:38.960155 kubelet[1925]: E0129 16:38:38.959876 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:38.960155 kubelet[1925]: E0129 16:38:38.959909 1925 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghhfd"
Jan 29 16:38:38.960374 kubelet[1925]: E0129 16:38:38.960082 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghhfd_calico-system(0879b6e8-c5af-4b29-b97b-72c7b290e750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghhfd" podUID="0879b6e8-c5af-4b29-b97b-72c7b290e750"
Jan 29 16:38:39.057745 containerd[1523]: time="2025-01-29T16:38:39.057632290Z" level=info msg="StartContainer for \"7c97a44b3cc29f2201398d4c03c524d4888c927fa95b4f50a9dc219001b026c7\" returns successfully"
Jan 29 16:38:39.104131 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 29 16:38:39.104354 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 29 16:38:39.391605 kubelet[1925]: E0129 16:38:39.391530 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:38:39.601236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162808364.mount: Deactivated successfully.
Jan 29 16:38:39.601429 systemd[1]: run-netns-cni\x2d4b78f2b7\x2da425\x2d0597\x2dc33e\x2da9bb00502d3a.mount: Deactivated successfully.
Jan 29 16:38:39.601567 systemd[1]: run-netns-cni\x2d03ee6519\x2d1be5\x2d95ed\x2d00ba\x2d725aca12c929.mount: Deactivated successfully.
Jan 29 16:38:39.773625 kubelet[1925]: I0129 16:38:39.772154 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77"
Jan 29 16:38:39.773854 containerd[1523]: time="2025-01-29T16:38:39.773077031Z" level=info msg="StopPodSandbox for \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\""
Jan 29 16:38:39.773854 containerd[1523]: time="2025-01-29T16:38:39.773413013Z" level=info msg="Ensure that sandbox 35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77 in task-service has been cleanup successfully"
Jan 29 16:38:39.774894 containerd[1523]: time="2025-01-29T16:38:39.774864989Z" level=info msg="TearDown network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\" successfully"
Jan 29 16:38:39.775017 containerd[1523]: time="2025-01-29T16:38:39.774992023Z" level=info msg="StopPodSandbox for \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\" returns successfully"
Jan 29 16:38:39.779149 containerd[1523]: time="2025-01-29T16:38:39.778114267Z" level=info msg="StopPodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\""
Jan 29 16:38:39.779149 containerd[1523]: time="2025-01-29T16:38:39.778928259Z" level=info msg="TearDown network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" successfully"
Jan 29 16:38:39.779149 containerd[1523]: time="2025-01-29T16:38:39.778980211Z" level=info msg="StopPodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" returns successfully"
Jan 29 16:38:39.778777 systemd[1]: run-netns-cni\x2d334d522d\x2d0ca8\x2d2669\x2d3b63\x2d91a688175b75.mount: Deactivated successfully.
Jan 29 16:38:39.780771 containerd[1523]: time="2025-01-29T16:38:39.779659278Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\""
Jan 29 16:38:39.780771 containerd[1523]: time="2025-01-29T16:38:39.779817627Z" level=info msg="TearDown network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" successfully"
Jan 29 16:38:39.780771 containerd[1523]: time="2025-01-29T16:38:39.779836494Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" returns successfully"
Jan 29 16:38:39.781646 kubelet[1925]: I0129 16:38:39.781391 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc"
Jan 29 16:38:39.782489 containerd[1523]: time="2025-01-29T16:38:39.782455667Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\""
Jan 29 16:38:39.782810 containerd[1523]: time="2025-01-29T16:38:39.782629634Z" level=info msg="StopPodSandbox for \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\""
Jan 29 16:38:39.782810 containerd[1523]: time="2025-01-29T16:38:39.782694556Z" level=info msg="TearDown network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" successfully"
Jan 29 16:38:39.782810 containerd[1523]: time="2025-01-29T16:38:39.782717163Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" returns successfully"
Jan 29 16:38:39.783176 containerd[1523]: time="2025-01-29T16:38:39.782966011Z" level=info msg="Ensure that sandbox 3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc in task-service has been cleanup successfully"
Jan 29 16:38:39.785161 containerd[1523]: time="2025-01-29T16:38:39.784911804Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\""
Jan 29 16:38:39.785161 containerd[1523]: time="2025-01-29T16:38:39.785023481Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully"
Jan 29 16:38:39.785161 containerd[1523]: time="2025-01-29T16:38:39.785044973Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully"
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.785547216Z" level=info msg="TearDown network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\" successfully"
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.785597072Z" level=info msg="StopPodSandbox for \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\" returns successfully"
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.785716149Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\""
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.786191895Z" level=info msg="StopPodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\""
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.786294145Z" level=info msg="TearDown network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" successfully"
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.786312724Z" level=info msg="StopPodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" returns successfully"
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.786591537Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully"
Jan 29 16:38:39.787195 containerd[1523]: time="2025-01-29T16:38:39.786632767Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully"
Jan 29 16:38:39.786733 systemd[1]: run-netns-cni\x2d430f93b7\x2dee51\x2d66e3\x2dfdbb\x2de4c1c4b5e8cc.mount: Deactivated successfully.
Jan 29 16:38:39.789016 containerd[1523]: time="2025-01-29T16:38:39.788350485Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\""
Jan 29 16:38:39.789016 containerd[1523]: time="2025-01-29T16:38:39.788585055Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully"
Jan 29 16:38:39.789016 containerd[1523]: time="2025-01-29T16:38:39.788609005Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully"
Jan 29 16:38:39.789016 containerd[1523]: time="2025-01-29T16:38:39.788828837Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\""
Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.789530832Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\""
Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.789635241Z" level=info msg="TearDown network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" successfully"
Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.789661264Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" returns successfully"
Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.789693889Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully"
Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.789717365Z" level=info msg="StopPodSandbox for
\"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully" Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.790442167Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.790558811Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully" Jan 29 16:38:39.790881 containerd[1523]: time="2025-01-29T16:38:39.790577664Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully" Jan 29 16:38:39.791275 containerd[1523]: time="2025-01-29T16:38:39.791213044Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" Jan 29 16:38:39.791331 containerd[1523]: time="2025-01-29T16:38:39.791315996Z" level=info msg="TearDown network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" successfully" Jan 29 16:38:39.791383 containerd[1523]: time="2025-01-29T16:38:39.791336728Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" returns successfully" Jan 29 16:38:39.793117 containerd[1523]: time="2025-01-29T16:38:39.793071958Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:38:39.793264 containerd[1523]: time="2025-01-29T16:38:39.793196969Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully" Jan 29 16:38:39.793264 containerd[1523]: time="2025-01-29T16:38:39.793216371Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully" Jan 29 16:38:39.795522 containerd[1523]: time="2025-01-29T16:38:39.793554558Z" level=info msg="StopPodSandbox for 
\"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:38:39.795522 containerd[1523]: time="2025-01-29T16:38:39.793661609Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully" Jan 29 16:38:39.795522 containerd[1523]: time="2025-01-29T16:38:39.793681397Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully" Jan 29 16:38:39.795938 containerd[1523]: time="2025-01-29T16:38:39.795898394Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:38:39.797149 containerd[1523]: time="2025-01-29T16:38:39.797121236Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully" Jan 29 16:38:39.797373 containerd[1523]: time="2025-01-29T16:38:39.797250801Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully" Jan 29 16:38:39.800296 containerd[1523]: time="2025-01-29T16:38:39.800244416Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:38:39.801271 containerd[1523]: time="2025-01-29T16:38:39.800323253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:11,}" Jan 29 16:38:39.802636 containerd[1523]: time="2025-01-29T16:38:39.802460726Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:38:39.802636 containerd[1523]: time="2025-01-29T16:38:39.802489292Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:38:39.804968 containerd[1523]: time="2025-01-29T16:38:39.804713736Z" 
level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:38:39.805771 containerd[1523]: time="2025-01-29T16:38:39.805193436Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:38:39.807046 containerd[1523]: time="2025-01-29T16:38:39.806727958Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:38:39.809700 containerd[1523]: time="2025-01-29T16:38:39.809628097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:7,}" Jan 29 16:38:39.818680 kubelet[1925]: I0129 16:38:39.818601 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nkvn9" podStartSLOduration=4.896117623 podStartE2EDuration="25.818555553s" podCreationTimestamp="2025-01-29 16:38:14 +0000 UTC" firstStartedPulling="2025-01-29 16:38:17.256280496 +0000 UTC m=+4.134725325" lastFinishedPulling="2025-01-29 16:38:38.690132709 +0000 UTC m=+25.057163255" observedRunningTime="2025-01-29 16:38:39.815411919 +0000 UTC m=+26.182442482" watchObservedRunningTime="2025-01-29 16:38:39.818555553 +0000 UTC m=+26.185586128" Jan 29 16:38:40.096736 systemd-networkd[1440]: cali5d3771cc8df: Link UP Jan 29 16:38:40.101540 systemd-networkd[1440]: cali5d3771cc8df: Gained carrier Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:39.900 [INFO][3016] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:39.944 [INFO][3016] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.202-k8s-csi--node--driver--ghhfd-eth0 csi-node-driver- calico-system 0879b6e8-c5af-4b29-b97b-72c7b290e750 1049 0 2025-01-29 16:38:14 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.24.202 csi-node-driver-ghhfd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5d3771cc8df [] []}} ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:39.944 [INFO][3016] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.010 [INFO][3055] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" HandleID="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Workload="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.031 [INFO][3055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" HandleID="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Workload="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002015a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.24.202", "pod":"csi-node-driver-ghhfd", "timestamp":"2025-01-29 16:38:40.010134692 +0000 UTC"}, Hostname:"10.230.24.202", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.031 [INFO][3055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.031 [INFO][3055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.031 [INFO][3055] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.202' Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.035 [INFO][3055] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.044 [INFO][3055] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.050 [INFO][3055] ipam/ipam.go 489: Trying affinity for 192.168.28.64/26 host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.054 [INFO][3055] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.059 [INFO][3055] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.059 [INFO][3055] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.062 [INFO][3055] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11 Jan 29 
16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.070 [INFO][3055] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.078 [INFO][3055] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.65/26] block=192.168.28.64/26 handle="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.078 [INFO][3055] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.65/26] handle="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" host="10.230.24.202" Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.078 [INFO][3055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:38:40.119405 containerd[1523]: 2025-01-29 16:38:40.078 [INFO][3055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.65/26] IPv6=[] ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" HandleID="k8s-pod-network.79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Workload="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.121458 containerd[1523]: 2025-01-29 16:38:40.082 [INFO][3016] cni-plugin/k8s.go 386: Populated endpoint ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-csi--node--driver--ghhfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0879b6e8-c5af-4b29-b97b-72c7b290e750", ResourceVersion:"1049", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 29, 16, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"", Pod:"csi-node-driver-ghhfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d3771cc8df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:38:40.121458 containerd[1523]: 2025-01-29 16:38:40.083 [INFO][3016] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.65/32] ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.121458 containerd[1523]: 2025-01-29 16:38:40.083 [INFO][3016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d3771cc8df ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.121458 containerd[1523]: 2025-01-29 16:38:40.099 [INFO][3016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.121458 containerd[1523]: 2025-01-29 16:38:40.102 [INFO][3016] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-csi--node--driver--ghhfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0879b6e8-c5af-4b29-b97b-72c7b290e750", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11", Pod:"csi-node-driver-ghhfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d3771cc8df", 
MAC:"1a:d5:05:fb:8f:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:38:40.121458 containerd[1523]: 2025-01-29 16:38:40.114 [INFO][3016] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11" Namespace="calico-system" Pod="csi-node-driver-ghhfd" WorkloadEndpoint="10.230.24.202-k8s-csi--node--driver--ghhfd-eth0" Jan 29 16:38:40.135076 systemd-networkd[1440]: califc5d3e4a564: Link UP Jan 29 16:38:40.136631 systemd-networkd[1440]: califc5d3e4a564: Gained carrier Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:39.886 [INFO][3018] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:39.944 [INFO][3018] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0 nginx-deployment-85f456d6dd- default 3c303794-1a6e-4a37-b643-42f09e6c9f41 1141 0 2025-01-29 16:38:32 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.24.202 nginx-deployment-85f456d6dd-wl4v2 eth0 default [] [] [kns.default ksa.default.default] califc5d3e4a564 [] []}} ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:39.944 [INFO][3018] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.151497 
containerd[1523]: 2025-01-29 16:38:40.012 [INFO][3054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" HandleID="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Workload="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.037 [INFO][3054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" HandleID="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Workload="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e5820), Attrs:map[string]string{"namespace":"default", "node":"10.230.24.202", "pod":"nginx-deployment-85f456d6dd-wl4v2", "timestamp":"2025-01-29 16:38:40.012014389 +0000 UTC"}, Hostname:"10.230.24.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.037 [INFO][3054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.078 [INFO][3054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.078 [INFO][3054] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.202' Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.082 [INFO][3054] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.089 [INFO][3054] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.096 [INFO][3054] ipam/ipam.go 489: Trying affinity for 192.168.28.64/26 host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.102 [INFO][3054] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.106 [INFO][3054] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.106 [INFO][3054] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.108 [INFO][3054] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.119 [INFO][3054] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.128 [INFO][3054] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.66/26] block=192.168.28.64/26 
handle="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.128 [INFO][3054] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.66/26] handle="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" host="10.230.24.202" Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.128 [INFO][3054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:38:40.151497 containerd[1523]: 2025-01-29 16:38:40.128 [INFO][3054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.66/26] IPv6=[] ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" HandleID="k8s-pod-network.706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Workload="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.152643 containerd[1523]: 2025-01-29 16:38:40.130 [INFO][3018] cni-plugin/k8s.go 386: Populated endpoint ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"3c303794-1a6e-4a37-b643-42f09e6c9f41", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-wl4v2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califc5d3e4a564", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:38:40.152643 containerd[1523]: 2025-01-29 16:38:40.130 [INFO][3018] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.66/32] ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.152643 containerd[1523]: 2025-01-29 16:38:40.131 [INFO][3018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc5d3e4a564 ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.152643 containerd[1523]: 2025-01-29 16:38:40.135 [INFO][3018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.152643 containerd[1523]: 2025-01-29 16:38:40.136 [INFO][3018] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" 
WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"3c303794-1a6e-4a37-b643-42f09e6c9f41", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c", Pod:"nginx-deployment-85f456d6dd-wl4v2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califc5d3e4a564", MAC:"5e:7e:ae:94:b1:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:38:40.152643 containerd[1523]: 2025-01-29 16:38:40.149 [INFO][3018] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c" Namespace="default" Pod="nginx-deployment-85f456d6dd-wl4v2" WorkloadEndpoint="10.230.24.202-k8s-nginx--deployment--85f456d6dd--wl4v2-eth0" Jan 29 16:38:40.164672 containerd[1523]: time="2025-01-29T16:38:40.164471845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:38:40.164672 containerd[1523]: time="2025-01-29T16:38:40.164580902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:38:40.164672 containerd[1523]: time="2025-01-29T16:38:40.164604715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:40.165041 containerd[1523]: time="2025-01-29T16:38:40.164741442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:40.189473 containerd[1523]: time="2025-01-29T16:38:40.186039956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:38:40.189473 containerd[1523]: time="2025-01-29T16:38:40.189389967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:38:40.189473 containerd[1523]: time="2025-01-29T16:38:40.189414256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:40.190269 containerd[1523]: time="2025-01-29T16:38:40.190022610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:38:40.202320 systemd[1]: Started cri-containerd-79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11.scope - libcontainer container 79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11. Jan 29 16:38:40.224968 systemd[1]: Started cri-containerd-706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c.scope - libcontainer container 706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c. 
Jan 29 16:38:40.258128 containerd[1523]: time="2025-01-29T16:38:40.257241614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghhfd,Uid:0879b6e8-c5af-4b29-b97b-72c7b290e750,Namespace:calico-system,Attempt:11,} returns sandbox id \"79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11\"" Jan 29 16:38:40.261671 containerd[1523]: time="2025-01-29T16:38:40.261628715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 16:38:40.302602 containerd[1523]: time="2025-01-29T16:38:40.302551735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wl4v2,Uid:3c303794-1a6e-4a37-b643-42f09e6c9f41,Namespace:default,Attempt:7,} returns sandbox id \"706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c\"" Jan 29 16:38:40.391987 kubelet[1925]: E0129 16:38:40.391772 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:40.809946 kernel: bpftool[3277]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 16:38:41.198321 systemd-networkd[1440]: vxlan.calico: Link UP Jan 29 16:38:41.198337 systemd-networkd[1440]: vxlan.calico: Gained carrier Jan 29 16:38:41.392745 kubelet[1925]: E0129 16:38:41.392646 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:41.670350 systemd-networkd[1440]: cali5d3771cc8df: Gained IPv6LL Jan 29 16:38:41.803196 containerd[1523]: time="2025-01-29T16:38:41.803127029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:41.804985 containerd[1523]: time="2025-01-29T16:38:41.804907740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 16:38:41.805441 containerd[1523]: time="2025-01-29T16:38:41.805376943Z" level=info 
msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:41.808379 containerd[1523]: time="2025-01-29T16:38:41.808309994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:41.809985 containerd[1523]: time="2025-01-29T16:38:41.809791878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.548113938s" Jan 29 16:38:41.809985 containerd[1523]: time="2025-01-29T16:38:41.809849449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 16:38:41.811799 containerd[1523]: time="2025-01-29T16:38:41.811730203Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:38:41.813438 containerd[1523]: time="2025-01-29T16:38:41.813218432Z" level=info msg="CreateContainer within sandbox \"79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 16:38:41.836205 containerd[1523]: time="2025-01-29T16:38:41.836126909Z" level=info msg="CreateContainer within sandbox \"79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"35cbf9744c763f4e90bfc3130dbbc7906b6f57d249c4a4fbdd464267eda6b0fd\"" Jan 29 16:38:41.837803 containerd[1523]: time="2025-01-29T16:38:41.836700815Z" level=info msg="StartContainer for 
\"35cbf9744c763f4e90bfc3130dbbc7906b6f57d249c4a4fbdd464267eda6b0fd\"" Jan 29 16:38:41.904981 systemd[1]: Started cri-containerd-35cbf9744c763f4e90bfc3130dbbc7906b6f57d249c4a4fbdd464267eda6b0fd.scope - libcontainer container 35cbf9744c763f4e90bfc3130dbbc7906b6f57d249c4a4fbdd464267eda6b0fd. Jan 29 16:38:41.963713 containerd[1523]: time="2025-01-29T16:38:41.963664971Z" level=info msg="StartContainer for \"35cbf9744c763f4e90bfc3130dbbc7906b6f57d249c4a4fbdd464267eda6b0fd\" returns successfully" Jan 29 16:38:42.118227 systemd-networkd[1440]: califc5d3e4a564: Gained IPv6LL Jan 29 16:38:42.393271 kubelet[1925]: E0129 16:38:42.392851 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:42.950072 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Jan 29 16:38:43.393842 kubelet[1925]: E0129 16:38:43.393718 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:44.394269 kubelet[1925]: E0129 16:38:44.394119 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:45.395277 kubelet[1925]: E0129 16:38:45.395228 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:46.214489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993866498.mount: Deactivated successfully. 
Jan 29 16:38:46.396366 kubelet[1925]: E0129 16:38:46.396300 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:47.397456 kubelet[1925]: E0129 16:38:47.397392 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:47.872424 containerd[1523]: time="2025-01-29T16:38:47.872350098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:47.874886 containerd[1523]: time="2025-01-29T16:38:47.874831169Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 16:38:47.881519 containerd[1523]: time="2025-01-29T16:38:47.881433989Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:47.885614 containerd[1523]: time="2025-01-29T16:38:47.885533305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:47.887279 containerd[1523]: time="2025-01-29T16:38:47.887129755Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 6.075336275s" Jan 29 16:38:47.887279 containerd[1523]: time="2025-01-29T16:38:47.887170948Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:38:47.889212 containerd[1523]: 
time="2025-01-29T16:38:47.888945259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:38:47.900127 containerd[1523]: time="2025-01-29T16:38:47.900078476Z" level=info msg="CreateContainer within sandbox \"706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 16:38:47.916418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703035309.mount: Deactivated successfully. Jan 29 16:38:47.920815 containerd[1523]: time="2025-01-29T16:38:47.920698590Z" level=info msg="CreateContainer within sandbox \"706794175df8af72dd615b837aeb530d0789496425e6be2f4df170e44cf6af0c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6260df98d86cb932a4f9ecf1310a5c38da6b8924380fe031ed059b73f5cd9f3d\"" Jan 29 16:38:47.922519 containerd[1523]: time="2025-01-29T16:38:47.921463889Z" level=info msg="StartContainer for \"6260df98d86cb932a4f9ecf1310a5c38da6b8924380fe031ed059b73f5cd9f3d\"" Jan 29 16:38:47.964040 systemd[1]: Started cri-containerd-6260df98d86cb932a4f9ecf1310a5c38da6b8924380fe031ed059b73f5cd9f3d.scope - libcontainer container 6260df98d86cb932a4f9ecf1310a5c38da6b8924380fe031ed059b73f5cd9f3d. 
Jan 29 16:38:47.997685 containerd[1523]: time="2025-01-29T16:38:47.997609323Z" level=info msg="StartContainer for \"6260df98d86cb932a4f9ecf1310a5c38da6b8924380fe031ed059b73f5cd9f3d\" returns successfully" Jan 29 16:38:48.398611 kubelet[1925]: E0129 16:38:48.398478 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:49.399035 kubelet[1925]: E0129 16:38:49.398972 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:49.599713 containerd[1523]: time="2025-01-29T16:38:49.599649423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:49.602229 containerd[1523]: time="2025-01-29T16:38:49.602176704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 16:38:49.603419 containerd[1523]: time="2025-01-29T16:38:49.603387798Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:49.606261 containerd[1523]: time="2025-01-29T16:38:49.606187762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:38:49.607597 containerd[1523]: time="2025-01-29T16:38:49.607411128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.718421941s" Jan 29 16:38:49.607597 containerd[1523]: time="2025-01-29T16:38:49.607454280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 16:38:49.610776 containerd[1523]: time="2025-01-29T16:38:49.610626188Z" level=info msg="CreateContainer within sandbox \"79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:38:49.649696 containerd[1523]: time="2025-01-29T16:38:49.649561392Z" level=info msg="CreateContainer within sandbox \"79be2aae8fda153aed81867acfaa46cc4c6f5f2e53a7782311021924b4f17e11\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6109bc30e2ba213d728e87d9c0eb0d26f2aadbfd1e8d6b3ffc807ad389dca1cd\"" Jan 29 16:38:49.650117 containerd[1523]: time="2025-01-29T16:38:49.650079346Z" level=info msg="StartContainer for \"6109bc30e2ba213d728e87d9c0eb0d26f2aadbfd1e8d6b3ffc807ad389dca1cd\"" Jan 29 16:38:49.700016 systemd[1]: Started cri-containerd-6109bc30e2ba213d728e87d9c0eb0d26f2aadbfd1e8d6b3ffc807ad389dca1cd.scope - libcontainer container 6109bc30e2ba213d728e87d9c0eb0d26f2aadbfd1e8d6b3ffc807ad389dca1cd. 
Jan 29 16:38:49.744913 containerd[1523]: time="2025-01-29T16:38:49.744852217Z" level=info msg="StartContainer for \"6109bc30e2ba213d728e87d9c0eb0d26f2aadbfd1e8d6b3ffc807ad389dca1cd\" returns successfully" Jan 29 16:38:49.948913 kubelet[1925]: I0129 16:38:49.948683 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-wl4v2" podStartSLOduration=10.364542642 podStartE2EDuration="17.948654741s" podCreationTimestamp="2025-01-29 16:38:32 +0000 UTC" firstStartedPulling="2025-01-29 16:38:40.304627253 +0000 UTC m=+26.671657793" lastFinishedPulling="2025-01-29 16:38:47.888739341 +0000 UTC m=+34.255769892" observedRunningTime="2025-01-29 16:38:48.885962555 +0000 UTC m=+35.252993106" watchObservedRunningTime="2025-01-29 16:38:49.948654741 +0000 UTC m=+36.315685311" Jan 29 16:38:49.949208 kubelet[1925]: I0129 16:38:49.948914 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ghhfd" podStartSLOduration=26.600574492 podStartE2EDuration="35.94890575s" podCreationTimestamp="2025-01-29 16:38:14 +0000 UTC" firstStartedPulling="2025-01-29 16:38:40.260649037 +0000 UTC m=+26.627679576" lastFinishedPulling="2025-01-29 16:38:49.608980283 +0000 UTC m=+35.976010834" observedRunningTime="2025-01-29 16:38:49.944783716 +0000 UTC m=+36.311814273" watchObservedRunningTime="2025-01-29 16:38:49.94890575 +0000 UTC m=+36.315936310" Jan 29 16:38:50.399385 kubelet[1925]: E0129 16:38:50.399300 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:50.516797 kubelet[1925]: I0129 16:38:50.516655 1925 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:38:50.516797 kubelet[1925]: I0129 16:38:50.516708 1925 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:38:51.400227 kubelet[1925]: E0129 16:38:51.400125 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:52.400823 kubelet[1925]: E0129 16:38:52.400701 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:53.401931 kubelet[1925]: E0129 16:38:53.401840 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:54.371173 kubelet[1925]: E0129 16:38:54.371090 1925 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:54.402227 kubelet[1925]: E0129 16:38:54.402157 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:55.402932 kubelet[1925]: E0129 16:38:55.402853 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:56.403824 kubelet[1925]: E0129 16:38:56.403713 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:57.404230 kubelet[1925]: E0129 16:38:57.404107 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:58.405232 kubelet[1925]: E0129 16:38:58.405149 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:38:59.406247 kubelet[1925]: E0129 16:38:59.406168 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:00.406519 kubelet[1925]: E0129 16:39:00.406443 1925 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:01.159112 kubelet[1925]: I0129 16:39:01.159012 1925 topology_manager.go:215] "Topology Admit Handler" podUID="b3eee2eb-cad1-43bc-93a4-36ec26b962e3" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 16:39:01.167726 systemd[1]: Created slice kubepods-besteffort-podb3eee2eb_cad1_43bc_93a4_36ec26b962e3.slice - libcontainer container kubepods-besteffort-podb3eee2eb_cad1_43bc_93a4_36ec26b962e3.slice. Jan 29 16:39:01.220996 kubelet[1925]: I0129 16:39:01.220729 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sstpj\" (UniqueName: \"kubernetes.io/projected/b3eee2eb-cad1-43bc-93a4-36ec26b962e3-kube-api-access-sstpj\") pod \"nfs-server-provisioner-0\" (UID: \"b3eee2eb-cad1-43bc-93a4-36ec26b962e3\") " pod="default/nfs-server-provisioner-0" Jan 29 16:39:01.220996 kubelet[1925]: I0129 16:39:01.220936 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b3eee2eb-cad1-43bc-93a4-36ec26b962e3-data\") pod \"nfs-server-provisioner-0\" (UID: \"b3eee2eb-cad1-43bc-93a4-36ec26b962e3\") " pod="default/nfs-server-provisioner-0" Jan 29 16:39:01.407504 kubelet[1925]: E0129 16:39:01.407435 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:01.473073 containerd[1523]: time="2025-01-29T16:39:01.473015626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b3eee2eb-cad1-43bc-93a4-36ec26b962e3,Namespace:default,Attempt:0,}" Jan 29 16:39:01.687311 systemd-networkd[1440]: cali60e51b789ff: Link UP Jan 29 16:39:01.688304 systemd-networkd[1440]: cali60e51b789ff: Gained carrier Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.561 [INFO][3628] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.202-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b3eee2eb-cad1-43bc-93a4-36ec26b962e3 1283 0 2025-01-29 16:39:01 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.24.202 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.561 [INFO][3628] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.603 [INFO][3638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" HandleID="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Workload="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 
16:39:01.627 [INFO][3638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" HandleID="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Workload="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290720), Attrs:map[string]string{"namespace":"default", "node":"10.230.24.202", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 16:39:01.603426389 +0000 UTC"}, Hostname:"10.230.24.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.628 [INFO][3638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.628 [INFO][3638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.628 [INFO][3638] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.202' Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.632 [INFO][3638] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.641 [INFO][3638] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.647 [INFO][3638] ipam/ipam.go 489: Trying affinity for 192.168.28.64/26 host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.652 [INFO][3638] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.656 [INFO][3638] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.657 [INFO][3638] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.660 [INFO][3638] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89 Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.670 [INFO][3638] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.676 [INFO][3638] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.67/26] block=192.168.28.64/26 
handle="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.676 [INFO][3638] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.67/26] handle="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" host="10.230.24.202" Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.676 [INFO][3638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:39:01.706714 containerd[1523]: 2025-01-29 16:39:01.676 [INFO][3638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.67/26] IPv6=[] ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" HandleID="k8s-pod-network.1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Workload="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.709314 containerd[1523]: 2025-01-29 16:39:01.679 [INFO][3628] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b3eee2eb-cad1-43bc-93a4-36ec26b962e3", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:39:01.709314 containerd[1523]: 2025-01-29 16:39:01.679 [INFO][3628] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.67/32] ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.709314 containerd[1523]: 2025-01-29 16:39:01.679 [INFO][3628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.709314 containerd[1523]: 2025-01-29 16:39:01.689 [INFO][3628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.710065 containerd[1523]: 2025-01-29 16:39:01.692 [INFO][3628] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b3eee2eb-cad1-43bc-93a4-36ec26b962e3", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 39, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0a:a9:da:2b:0b:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:39:01.710065 containerd[1523]: 2025-01-29 16:39:01.703 [INFO][3628] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.202-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:39:01.741807 containerd[1523]: time="2025-01-29T16:39:01.740818651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:39:01.741807 containerd[1523]: time="2025-01-29T16:39:01.740967225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:39:01.741807 containerd[1523]: time="2025-01-29T16:39:01.741047266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:39:01.742510 containerd[1523]: time="2025-01-29T16:39:01.742439309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:39:01.769466 systemd[1]: run-containerd-runc-k8s.io-1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89-runc.bfQDt4.mount: Deactivated successfully. Jan 29 16:39:01.780991 systemd[1]: Started cri-containerd-1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89.scope - libcontainer container 1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89. 
Jan 29 16:39:01.837528 containerd[1523]: time="2025-01-29T16:39:01.837431007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b3eee2eb-cad1-43bc-93a4-36ec26b962e3,Namespace:default,Attempt:0,} returns sandbox id \"1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89\"" Jan 29 16:39:01.840077 containerd[1523]: time="2025-01-29T16:39:01.840019652Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:39:02.407724 kubelet[1925]: E0129 16:39:02.407650 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:02.983133 systemd-networkd[1440]: cali60e51b789ff: Gained IPv6LL Jan 29 16:39:03.408785 kubelet[1925]: E0129 16:39:03.408265 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:04.409454 kubelet[1925]: E0129 16:39:04.409280 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:05.021144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294462382.mount: Deactivated successfully. 
Jan 29 16:39:05.410495 kubelet[1925]: E0129 16:39:05.410341 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:06.411945 kubelet[1925]: E0129 16:39:06.411880 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:07.412191 kubelet[1925]: E0129 16:39:07.412106 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:07.897792 containerd[1523]: time="2025-01-29T16:39:07.897688244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:39:07.899175 containerd[1523]: time="2025-01-29T16:39:07.899129866Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 16:39:07.899972 containerd[1523]: time="2025-01-29T16:39:07.899903320Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:39:07.930855 containerd[1523]: time="2025-01-29T16:39:07.930732188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:39:07.932670 containerd[1523]: time="2025-01-29T16:39:07.932461540Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.092337247s" Jan 29 16:39:07.932670 containerd[1523]: time="2025-01-29T16:39:07.932512538Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:39:07.937895 containerd[1523]: time="2025-01-29T16:39:07.937845107Z" level=info msg="CreateContainer within sandbox \"1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:39:08.157922 containerd[1523]: time="2025-01-29T16:39:08.157626399Z" level=info msg="CreateContainer within sandbox \"1434cdab743dfc3a5a084df4df2974ad8b325b9ad1976264fa07f7397e05bd89\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fb45b6eb7cc444687bfe0d7eda13bef2aec6cc92b62402727065a329e8b5e6e8\"" Jan 29 16:39:08.158847 containerd[1523]: time="2025-01-29T16:39:08.158367710Z" level=info msg="StartContainer for \"fb45b6eb7cc444687bfe0d7eda13bef2aec6cc92b62402727065a329e8b5e6e8\"" Jan 29 16:39:08.207010 systemd[1]: Started cri-containerd-fb45b6eb7cc444687bfe0d7eda13bef2aec6cc92b62402727065a329e8b5e6e8.scope - libcontainer container fb45b6eb7cc444687bfe0d7eda13bef2aec6cc92b62402727065a329e8b5e6e8. 
Jan 29 16:39:08.243438 containerd[1523]: time="2025-01-29T16:39:08.243388649Z" level=info msg="StartContainer for \"fb45b6eb7cc444687bfe0d7eda13bef2aec6cc92b62402727065a329e8b5e6e8\" returns successfully" Jan 29 16:39:08.412912 kubelet[1925]: E0129 16:39:08.412725 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:09.018314 kubelet[1925]: I0129 16:39:09.018201 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.92276992 podStartE2EDuration="8.018165706s" podCreationTimestamp="2025-01-29 16:39:01 +0000 UTC" firstStartedPulling="2025-01-29 16:39:01.839434699 +0000 UTC m=+48.206465242" lastFinishedPulling="2025-01-29 16:39:07.934830477 +0000 UTC m=+54.301861028" observedRunningTime="2025-01-29 16:39:09.016910556 +0000 UTC m=+55.383941107" watchObservedRunningTime="2025-01-29 16:39:09.018165706 +0000 UTC m=+55.385196252" Jan 29 16:39:09.413628 kubelet[1925]: E0129 16:39:09.413408 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:10.414075 kubelet[1925]: E0129 16:39:10.413962 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:11.415052 kubelet[1925]: E0129 16:39:11.414970 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:12.416098 kubelet[1925]: E0129 16:39:12.415992 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:13.417314 kubelet[1925]: E0129 16:39:13.417210 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:14.370938 kubelet[1925]: E0129 16:39:14.370837 1925 file.go:104] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:14.411362 containerd[1523]: time="2025-01-29T16:39:14.410765585Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:39:14.411362 containerd[1523]: time="2025-01-29T16:39:14.411030663Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully" Jan 29 16:39:14.411362 containerd[1523]: time="2025-01-29T16:39:14.411138403Z" level=info msg="StopPodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully" Jan 29 16:39:14.416089 containerd[1523]: time="2025-01-29T16:39:14.416018114Z" level=info msg="RemovePodSandbox for \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:39:14.417435 kubelet[1925]: E0129 16:39:14.417380 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:14.424146 containerd[1523]: time="2025-01-29T16:39:14.424100550Z" level=info msg="Forcibly stopping sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\"" Jan 29 16:39:14.424370 containerd[1523]: time="2025-01-29T16:39:14.424256822Z" level=info msg="TearDown network for sandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" successfully" Jan 29 16:39:14.442599 containerd[1523]: time="2025-01-29T16:39:14.442500742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.442852 containerd[1523]: time="2025-01-29T16:39:14.442608550Z" level=info msg="RemovePodSandbox \"e35fcdd3250769bbcc1381476a9c6c0d4f96ad243509d698877cddaf95d2c990\" returns successfully" Jan 29 16:39:14.444794 containerd[1523]: time="2025-01-29T16:39:14.444629818Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:39:14.444954 containerd[1523]: time="2025-01-29T16:39:14.444823655Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully" Jan 29 16:39:14.444954 containerd[1523]: time="2025-01-29T16:39:14.444879648Z" level=info msg="StopPodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully" Jan 29 16:39:14.445786 containerd[1523]: time="2025-01-29T16:39:14.445623933Z" level=info msg="RemovePodSandbox for \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:39:14.445786 containerd[1523]: time="2025-01-29T16:39:14.445668155Z" level=info msg="Forcibly stopping sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\"" Jan 29 16:39:14.445951 containerd[1523]: time="2025-01-29T16:39:14.445787197Z" level=info msg="TearDown network for sandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" successfully" Jan 29 16:39:14.448438 containerd[1523]: time="2025-01-29T16:39:14.448383852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.448941 containerd[1523]: time="2025-01-29T16:39:14.448438777Z" level=info msg="RemovePodSandbox \"e468574f2c4d61d4c62560ed065c37a21dbd8a84009c1e75ca595e748901caa3\" returns successfully" Jan 29 16:39:14.449492 containerd[1523]: time="2025-01-29T16:39:14.449132671Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:39:14.449492 containerd[1523]: time="2025-01-29T16:39:14.449341840Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully" Jan 29 16:39:14.449492 containerd[1523]: time="2025-01-29T16:39:14.449371619Z" level=info msg="StopPodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully" Jan 29 16:39:14.450865 containerd[1523]: time="2025-01-29T16:39:14.449851127Z" level=info msg="RemovePodSandbox for \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:39:14.450865 containerd[1523]: time="2025-01-29T16:39:14.449888313Z" level=info msg="Forcibly stopping sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\"" Jan 29 16:39:14.450865 containerd[1523]: time="2025-01-29T16:39:14.449975744Z" level=info msg="TearDown network for sandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" successfully" Jan 29 16:39:14.452839 containerd[1523]: time="2025-01-29T16:39:14.452780060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.453010 containerd[1523]: time="2025-01-29T16:39:14.452838887Z" level=info msg="RemovePodSandbox \"59eff04dced0609ea87ef538d971c1603251b18fd95be4c5314e00fe23874bcf\" returns successfully" Jan 29 16:39:14.453359 containerd[1523]: time="2025-01-29T16:39:14.453317633Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" Jan 29 16:39:14.453471 containerd[1523]: time="2025-01-29T16:39:14.453432939Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully" Jan 29 16:39:14.453792 containerd[1523]: time="2025-01-29T16:39:14.453470311Z" level=info msg="StopPodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully" Jan 29 16:39:14.454778 containerd[1523]: time="2025-01-29T16:39:14.454040968Z" level=info msg="RemovePodSandbox for \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" Jan 29 16:39:14.454778 containerd[1523]: time="2025-01-29T16:39:14.454080605Z" level=info msg="Forcibly stopping sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\"" Jan 29 16:39:14.454778 containerd[1523]: time="2025-01-29T16:39:14.454169353Z" level=info msg="TearDown network for sandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" successfully" Jan 29 16:39:14.456654 containerd[1523]: time="2025-01-29T16:39:14.456609030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.456654 containerd[1523]: time="2025-01-29T16:39:14.456658870Z" level=info msg="RemovePodSandbox \"dbe56831af47f5f85c6a8b5a0ccd5dc59d1dd15994bc03753a62351bd77cf38b\" returns successfully" Jan 29 16:39:14.457329 containerd[1523]: time="2025-01-29T16:39:14.457133715Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" Jan 29 16:39:14.457329 containerd[1523]: time="2025-01-29T16:39:14.457242387Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully" Jan 29 16:39:14.457329 containerd[1523]: time="2025-01-29T16:39:14.457262201Z" level=info msg="StopPodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully" Jan 29 16:39:14.458114 containerd[1523]: time="2025-01-29T16:39:14.457827418Z" level=info msg="RemovePodSandbox for \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" Jan 29 16:39:14.458114 containerd[1523]: time="2025-01-29T16:39:14.457873616Z" level=info msg="Forcibly stopping sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\"" Jan 29 16:39:14.458114 containerd[1523]: time="2025-01-29T16:39:14.457976777Z" level=info msg="TearDown network for sandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" successfully" Jan 29 16:39:14.460396 containerd[1523]: time="2025-01-29T16:39:14.460351058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.460761 containerd[1523]: time="2025-01-29T16:39:14.460405805Z" level=info msg="RemovePodSandbox \"f3deae6dc24d96e2b66cf92606b0d48661fd90ad8d0cdc9edcad0847bd8927ea\" returns successfully" Jan 29 16:39:14.461375 containerd[1523]: time="2025-01-29T16:39:14.460976122Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:39:14.461375 containerd[1523]: time="2025-01-29T16:39:14.461086292Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully" Jan 29 16:39:14.461375 containerd[1523]: time="2025-01-29T16:39:14.461108157Z" level=info msg="StopPodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully" Jan 29 16:39:14.462448 containerd[1523]: time="2025-01-29T16:39:14.461730863Z" level=info msg="RemovePodSandbox for \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:39:14.462448 containerd[1523]: time="2025-01-29T16:39:14.461830093Z" level=info msg="Forcibly stopping sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\"" Jan 29 16:39:14.462448 containerd[1523]: time="2025-01-29T16:39:14.461919690Z" level=info msg="TearDown network for sandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" successfully" Jan 29 16:39:14.464512 containerd[1523]: time="2025-01-29T16:39:14.464450982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.464512 containerd[1523]: time="2025-01-29T16:39:14.464501971Z" level=info msg="RemovePodSandbox \"2efbf5d0f307b3be20fff5705f37217ad67ab6699fe3eb4f1d925817f4d98cf5\" returns successfully" Jan 29 16:39:14.465432 containerd[1523]: time="2025-01-29T16:39:14.465146495Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" Jan 29 16:39:14.465432 containerd[1523]: time="2025-01-29T16:39:14.465281574Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully" Jan 29 16:39:14.465432 containerd[1523]: time="2025-01-29T16:39:14.465335111Z" level=info msg="StopPodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully" Jan 29 16:39:14.466235 containerd[1523]: time="2025-01-29T16:39:14.465946227Z" level=info msg="RemovePodSandbox for \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" Jan 29 16:39:14.466235 containerd[1523]: time="2025-01-29T16:39:14.465992389Z" level=info msg="Forcibly stopping sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\"" Jan 29 16:39:14.466235 containerd[1523]: time="2025-01-29T16:39:14.466087968Z" level=info msg="TearDown network for sandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" successfully" Jan 29 16:39:14.468718 containerd[1523]: time="2025-01-29T16:39:14.468672190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.468848 containerd[1523]: time="2025-01-29T16:39:14.468724224Z" level=info msg="RemovePodSandbox \"f2677d74a531cf065342690475879b4cc0ab35885aa54345aac468f71ec78ac9\" returns successfully" Jan 29 16:39:14.469696 containerd[1523]: time="2025-01-29T16:39:14.469446523Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\"" Jan 29 16:39:14.469696 containerd[1523]: time="2025-01-29T16:39:14.469587454Z" level=info msg="TearDown network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" successfully" Jan 29 16:39:14.469696 containerd[1523]: time="2025-01-29T16:39:14.469608968Z" level=info msg="StopPodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" returns successfully" Jan 29 16:39:14.470049 containerd[1523]: time="2025-01-29T16:39:14.470002080Z" level=info msg="RemovePodSandbox for \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\"" Jan 29 16:39:14.470159 containerd[1523]: time="2025-01-29T16:39:14.470048343Z" level=info msg="Forcibly stopping sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\"" Jan 29 16:39:14.470242 containerd[1523]: time="2025-01-29T16:39:14.470153439Z" level=info msg="TearDown network for sandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" successfully" Jan 29 16:39:14.472588 containerd[1523]: time="2025-01-29T16:39:14.472542696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.472785 containerd[1523]: time="2025-01-29T16:39:14.472593983Z" level=info msg="RemovePodSandbox \"cc5ee80badb343f0e1bd663d0069643ce837b585eb1873cf591a609969442bcb\" returns successfully" Jan 29 16:39:14.473447 containerd[1523]: time="2025-01-29T16:39:14.473187055Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\"" Jan 29 16:39:14.473447 containerd[1523]: time="2025-01-29T16:39:14.473298331Z" level=info msg="TearDown network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" successfully" Jan 29 16:39:14.473447 containerd[1523]: time="2025-01-29T16:39:14.473318036Z" level=info msg="StopPodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" returns successfully" Jan 29 16:39:14.473858 containerd[1523]: time="2025-01-29T16:39:14.473823625Z" level=info msg="RemovePodSandbox for \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\"" Jan 29 16:39:14.473950 containerd[1523]: time="2025-01-29T16:39:14.473860620Z" level=info msg="Forcibly stopping sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\"" Jan 29 16:39:14.474068 containerd[1523]: time="2025-01-29T16:39:14.474019459Z" level=info msg="TearDown network for sandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" successfully" Jan 29 16:39:14.476443 containerd[1523]: time="2025-01-29T16:39:14.476397645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.476443 containerd[1523]: time="2025-01-29T16:39:14.476446902Z" level=info msg="RemovePodSandbox \"c0e11258579202d3efc31396bfbd1d950f6d62878d72cd969a376f862ff88ca8\" returns successfully" Jan 29 16:39:14.477329 containerd[1523]: time="2025-01-29T16:39:14.476993373Z" level=info msg="StopPodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\"" Jan 29 16:39:14.477329 containerd[1523]: time="2025-01-29T16:39:14.477147155Z" level=info msg="TearDown network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" successfully" Jan 29 16:39:14.477329 containerd[1523]: time="2025-01-29T16:39:14.477177232Z" level=info msg="StopPodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" returns successfully" Jan 29 16:39:14.477667 containerd[1523]: time="2025-01-29T16:39:14.477572778Z" level=info msg="RemovePodSandbox for \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\"" Jan 29 16:39:14.477667 containerd[1523]: time="2025-01-29T16:39:14.477606093Z" level=info msg="Forcibly stopping sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\"" Jan 29 16:39:14.477800 containerd[1523]: time="2025-01-29T16:39:14.477721484Z" level=info msg="TearDown network for sandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" successfully" Jan 29 16:39:14.480425 containerd[1523]: time="2025-01-29T16:39:14.480304783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.480425 containerd[1523]: time="2025-01-29T16:39:14.480364805Z" level=info msg="RemovePodSandbox \"4fb53295ae6d8d87e43ed4bff9d5dd88ef07c58d516bb74446f47c7be99498fc\" returns successfully" Jan 29 16:39:14.480872 containerd[1523]: time="2025-01-29T16:39:14.480819341Z" level=info msg="StopPodSandbox for \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\"" Jan 29 16:39:14.480969 containerd[1523]: time="2025-01-29T16:39:14.480944707Z" level=info msg="TearDown network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\" successfully" Jan 29 16:39:14.480969 containerd[1523]: time="2025-01-29T16:39:14.480965091Z" level=info msg="StopPodSandbox for \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\" returns successfully" Jan 29 16:39:14.481539 containerd[1523]: time="2025-01-29T16:39:14.481479056Z" level=info msg="RemovePodSandbox for \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\"" Jan 29 16:39:14.481611 containerd[1523]: time="2025-01-29T16:39:14.481543259Z" level=info msg="Forcibly stopping sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\"" Jan 29 16:39:14.481675 containerd[1523]: time="2025-01-29T16:39:14.481627917Z" level=info msg="TearDown network for sandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\" successfully" Jan 29 16:39:14.483977 containerd[1523]: time="2025-01-29T16:39:14.483919548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.484173 containerd[1523]: time="2025-01-29T16:39:14.483979059Z" level=info msg="RemovePodSandbox \"35e68faa00b6dd856c6aab241b02a510968eac4171841f0bf9bb9caba676be77\" returns successfully" Jan 29 16:39:14.484824 containerd[1523]: time="2025-01-29T16:39:14.484627702Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:39:14.484824 containerd[1523]: time="2025-01-29T16:39:14.484787500Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:39:14.484824 containerd[1523]: time="2025-01-29T16:39:14.484818768Z" level=info msg="StopPodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:39:14.486015 containerd[1523]: time="2025-01-29T16:39:14.485394496Z" level=info msg="RemovePodSandbox for \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:39:14.486015 containerd[1523]: time="2025-01-29T16:39:14.485438466Z" level=info msg="Forcibly stopping sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\"" Jan 29 16:39:14.486015 containerd[1523]: time="2025-01-29T16:39:14.485609021Z" level=info msg="TearDown network for sandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" successfully" Jan 29 16:39:14.488033 containerd[1523]: time="2025-01-29T16:39:14.487977662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.488033 containerd[1523]: time="2025-01-29T16:39:14.488030327Z" level=info msg="RemovePodSandbox \"0505bb38903c7c592ccbc828cdc9fdb6579d789547297666de5a28ea1f26b2a4\" returns successfully" Jan 29 16:39:14.489067 containerd[1523]: time="2025-01-29T16:39:14.488630396Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:39:14.489067 containerd[1523]: time="2025-01-29T16:39:14.488768474Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:39:14.489067 containerd[1523]: time="2025-01-29T16:39:14.488791940Z" level=info msg="StopPodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:39:14.489248 containerd[1523]: time="2025-01-29T16:39:14.489126380Z" level=info msg="RemovePodSandbox for \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:39:14.489248 containerd[1523]: time="2025-01-29T16:39:14.489159895Z" level=info msg="Forcibly stopping sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\"" Jan 29 16:39:14.489353 containerd[1523]: time="2025-01-29T16:39:14.489250746Z" level=info msg="TearDown network for sandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" successfully" Jan 29 16:39:14.491655 containerd[1523]: time="2025-01-29T16:39:14.491597214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.491655 containerd[1523]: time="2025-01-29T16:39:14.491647132Z" level=info msg="RemovePodSandbox \"405609bd3ee7c1a1167c5207814e6ad2286d312d1d0e5a15b0abc08af3720852\" returns successfully" Jan 29 16:39:14.492504 containerd[1523]: time="2025-01-29T16:39:14.492048436Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:39:14.492504 containerd[1523]: time="2025-01-29T16:39:14.492166231Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully" Jan 29 16:39:14.492504 containerd[1523]: time="2025-01-29T16:39:14.492228820Z" level=info msg="StopPodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully" Jan 29 16:39:14.493166 containerd[1523]: time="2025-01-29T16:39:14.492858596Z" level=info msg="RemovePodSandbox for \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:39:14.493166 containerd[1523]: time="2025-01-29T16:39:14.492897816Z" level=info msg="Forcibly stopping sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\"" Jan 29 16:39:14.493166 containerd[1523]: time="2025-01-29T16:39:14.492988159Z" level=info msg="TearDown network for sandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" successfully" Jan 29 16:39:14.495299 containerd[1523]: time="2025-01-29T16:39:14.495257186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.495411 containerd[1523]: time="2025-01-29T16:39:14.495307922Z" level=info msg="RemovePodSandbox \"2eed4f29416d08e8a229b320b2680c8e4bdfa4226deebc48886d61e197c85549\" returns successfully" Jan 29 16:39:14.496232 containerd[1523]: time="2025-01-29T16:39:14.495832118Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" Jan 29 16:39:14.496232 containerd[1523]: time="2025-01-29T16:39:14.495933329Z" level=info msg="TearDown network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" successfully" Jan 29 16:39:14.496232 containerd[1523]: time="2025-01-29T16:39:14.495951871Z" level=info msg="StopPodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" returns successfully" Jan 29 16:39:14.496815 containerd[1523]: time="2025-01-29T16:39:14.496730582Z" level=info msg="RemovePodSandbox for \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" Jan 29 16:39:14.496893 containerd[1523]: time="2025-01-29T16:39:14.496819951Z" level=info msg="Forcibly stopping sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\"" Jan 29 16:39:14.496947 containerd[1523]: time="2025-01-29T16:39:14.496907668Z" level=info msg="TearDown network for sandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" successfully" Jan 29 16:39:14.501690 containerd[1523]: time="2025-01-29T16:39:14.501195801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.501690 containerd[1523]: time="2025-01-29T16:39:14.501251964Z" level=info msg="RemovePodSandbox \"d7c013bfec3a2464280f3995f4d6ddd04671d9963f35776b2c6f5a315516ef8e\" returns successfully" Jan 29 16:39:14.502087 containerd[1523]: time="2025-01-29T16:39:14.501806902Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\"" Jan 29 16:39:14.502386 containerd[1523]: time="2025-01-29T16:39:14.502353047Z" level=info msg="TearDown network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" successfully" Jan 29 16:39:14.502386 containerd[1523]: time="2025-01-29T16:39:14.502382533Z" level=info msg="StopPodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" returns successfully" Jan 29 16:39:14.502771 containerd[1523]: time="2025-01-29T16:39:14.502727972Z" level=info msg="RemovePodSandbox for \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\"" Jan 29 16:39:14.502936 containerd[1523]: time="2025-01-29T16:39:14.502843892Z" level=info msg="Forcibly stopping sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\"" Jan 29 16:39:14.503114 containerd[1523]: time="2025-01-29T16:39:14.503063929Z" level=info msg="TearDown network for sandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" successfully" Jan 29 16:39:14.505594 containerd[1523]: time="2025-01-29T16:39:14.505546662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.505720 containerd[1523]: time="2025-01-29T16:39:14.505595060Z" level=info msg="RemovePodSandbox \"fda0c38bc56f22a1e981bfcb1ea48337151e4bfc8bc8f7a9b6b010dd930d54ef\" returns successfully" Jan 29 16:39:14.506341 containerd[1523]: time="2025-01-29T16:39:14.506101268Z" level=info msg="StopPodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\"" Jan 29 16:39:14.506341 containerd[1523]: time="2025-01-29T16:39:14.506235316Z" level=info msg="TearDown network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" successfully" Jan 29 16:39:14.506341 containerd[1523]: time="2025-01-29T16:39:14.506256077Z" level=info msg="StopPodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" returns successfully" Jan 29 16:39:14.507115 containerd[1523]: time="2025-01-29T16:39:14.506686002Z" level=info msg="RemovePodSandbox for \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\"" Jan 29 16:39:14.507115 containerd[1523]: time="2025-01-29T16:39:14.506722024Z" level=info msg="Forcibly stopping sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\"" Jan 29 16:39:14.507115 containerd[1523]: time="2025-01-29T16:39:14.506837552Z" level=info msg="TearDown network for sandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" successfully" Jan 29 16:39:14.509483 containerd[1523]: time="2025-01-29T16:39:14.509427886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.509483 containerd[1523]: time="2025-01-29T16:39:14.509482052Z" level=info msg="RemovePodSandbox \"c40910d4f83c9120d4501590a6953986d18f9464fe2b40137c612ae95ef61f4e\" returns successfully" Jan 29 16:39:14.510253 containerd[1523]: time="2025-01-29T16:39:14.509928281Z" level=info msg="StopPodSandbox for \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\"" Jan 29 16:39:14.510253 containerd[1523]: time="2025-01-29T16:39:14.510065626Z" level=info msg="TearDown network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\" successfully" Jan 29 16:39:14.510253 containerd[1523]: time="2025-01-29T16:39:14.510083437Z" level=info msg="StopPodSandbox for \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\" returns successfully" Jan 29 16:39:14.516481 containerd[1523]: time="2025-01-29T16:39:14.516026350Z" level=info msg="RemovePodSandbox for \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\"" Jan 29 16:39:14.516481 containerd[1523]: time="2025-01-29T16:39:14.516059595Z" level=info msg="Forcibly stopping sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\"" Jan 29 16:39:14.516481 containerd[1523]: time="2025-01-29T16:39:14.516170059Z" level=info msg="TearDown network for sandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\" successfully" Jan 29 16:39:14.519839 containerd[1523]: time="2025-01-29T16:39:14.518945179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:39:14.519839 containerd[1523]: time="2025-01-29T16:39:14.519129324Z" level=info msg="RemovePodSandbox \"3d2ec2ca2b0ed82c1b3b3d35d4abad6a2d4a9630f0257c3459c3f40010a271cc\" returns successfully" Jan 29 16:39:15.417609 kubelet[1925]: E0129 16:39:15.417533 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:16.418662 kubelet[1925]: E0129 16:39:16.418593 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:17.418968 kubelet[1925]: E0129 16:39:17.418819 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:18.265664 kubelet[1925]: I0129 16:39:18.265606 1925 topology_manager.go:215] "Topology Admit Handler" podUID="5d3ad734-0c45-4f3a-8668-638c9a20c538" podNamespace="default" podName="test-pod-1" Jan 29 16:39:18.274414 systemd[1]: Created slice kubepods-besteffort-pod5d3ad734_0c45_4f3a_8668_638c9a20c538.slice - libcontainer container kubepods-besteffort-pod5d3ad734_0c45_4f3a_8668_638c9a20c538.slice. 
Jan 29 16:39:18.419158 kubelet[1925]: E0129 16:39:18.419108 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:18.433465 kubelet[1925]: I0129 16:39:18.433392 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df53613f-db50-4aeb-9e74-25b6765ae9f6\" (UniqueName: \"kubernetes.io/nfs/5d3ad734-0c45-4f3a-8668-638c9a20c538-pvc-df53613f-db50-4aeb-9e74-25b6765ae9f6\") pod \"test-pod-1\" (UID: \"5d3ad734-0c45-4f3a-8668-638c9a20c538\") " pod="default/test-pod-1" Jan 29 16:39:18.433557 kubelet[1925]: I0129 16:39:18.433453 1925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdjc\" (UniqueName: \"kubernetes.io/projected/5d3ad734-0c45-4f3a-8668-638c9a20c538-kube-api-access-6vdjc\") pod \"test-pod-1\" (UID: \"5d3ad734-0c45-4f3a-8668-638c9a20c538\") " pod="default/test-pod-1" Jan 29 16:39:18.583075 kernel: FS-Cache: Loaded Jan 29 16:39:18.665942 kernel: RPC: Registered named UNIX socket transport module. Jan 29 16:39:18.666172 kernel: RPC: Registered udp transport module. Jan 29 16:39:18.667023 kernel: RPC: Registered tcp transport module. Jan 29 16:39:18.668047 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 16:39:18.669338 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 16:39:18.948248 kernel: NFS: Registering the id_resolver key type Jan 29 16:39:18.948491 kernel: Key type id_resolver registered Jan 29 16:39:18.950798 kernel: Key type id_legacy registered Jan 29 16:39:19.001693 nfsidmap[3830]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 29 16:39:19.009648 nfsidmap[3833]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 29 16:39:19.179888 containerd[1523]: time="2025-01-29T16:39:19.179808531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5d3ad734-0c45-4f3a-8668-638c9a20c538,Namespace:default,Attempt:0,}" Jan 29 16:39:19.392663 systemd-networkd[1440]: cali5ec59c6bf6e: Link UP Jan 29 16:39:19.394279 systemd-networkd[1440]: cali5ec59c6bf6e: Gained carrier Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.265 [INFO][3836] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.202-k8s-test--pod--1-eth0 default 5d3ad734-0c45-4f3a-8668-638c9a20c538 1349 0 2025-01-29 16:39:03 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.24.202 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.266 [INFO][3836] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.314 [INFO][3847] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" HandleID="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Workload="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.333 [INFO][3847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" HandleID="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Workload="10.230.24.202-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318a80), Attrs:map[string]string{"namespace":"default", "node":"10.230.24.202", "pod":"test-pod-1", "timestamp":"2025-01-29 16:39:19.314118084 +0000 UTC"}, Hostname:"10.230.24.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.333 [INFO][3847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.333 [INFO][3847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.334 [INFO][3847] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.202' Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.337 [INFO][3847] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.343 [INFO][3847] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.348 [INFO][3847] ipam/ipam.go 489: Trying affinity for 192.168.28.64/26 host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.353 [INFO][3847] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.358 [INFO][3847] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.358 [INFO][3847] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.361 [INFO][3847] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197 Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.372 [INFO][3847] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.384 [INFO][3847] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.68/26] block=192.168.28.64/26 
handle="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.384 [INFO][3847] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.68/26] handle="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" host="10.230.24.202" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.384 [INFO][3847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.384 [INFO][3847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.68/26] IPv6=[] ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" HandleID="k8s-pod-network.536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Workload="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.411891 containerd[1523]: 2025-01-29 16:39:19.386 [INFO][3836] cni-plugin/k8s.go 386: Populated endpoint ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5d3ad734-0c45-4f3a-8668-638c9a20c538", ResourceVersion:"1349", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.230.24.202", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:39:19.413205 containerd[1523]: 2025-01-29 16:39:19.387 [INFO][3836] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.68/32] ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.413205 containerd[1523]: 2025-01-29 16:39:19.387 [INFO][3836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.413205 containerd[1523]: 2025-01-29 16:39:19.395 [INFO][3836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.413205 containerd[1523]: 2025-01-29 16:39:19.395 [INFO][3836] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.202-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5d3ad734-0c45-4f3a-8668-638c9a20c538", ResourceVersion:"1349", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 29, 16, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.202", ContainerID:"536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d6:d2:a6:d3:11:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:39:19.413205 containerd[1523]: 2025-01-29 16:39:19.406 [INFO][3836] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.202-k8s-test--pod--1-eth0" Jan 29 16:39:19.419657 kubelet[1925]: E0129 16:39:19.419604 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:19.445320 containerd[1523]: time="2025-01-29T16:39:19.443976592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:39:19.445320 containerd[1523]: time="2025-01-29T16:39:19.444094842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:39:19.445320 containerd[1523]: time="2025-01-29T16:39:19.444119494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:39:19.445320 containerd[1523]: time="2025-01-29T16:39:19.444297312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:39:19.475058 systemd[1]: Started cri-containerd-536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197.scope - libcontainer container 536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197. Jan 29 16:39:19.540024 containerd[1523]: time="2025-01-29T16:39:19.539874492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5d3ad734-0c45-4f3a-8668-638c9a20c538,Namespace:default,Attempt:0,} returns sandbox id \"536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197\"" Jan 29 16:39:19.543374 containerd[1523]: time="2025-01-29T16:39:19.543275643Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:39:19.935807 containerd[1523]: time="2025-01-29T16:39:19.935234667Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 16:39:19.938682 containerd[1523]: time="2025-01-29T16:39:19.938643103Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 395.267926ms" Jan 29 16:39:19.938909 containerd[1523]: time="2025-01-29T16:39:19.938880979Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:39:19.941380 
containerd[1523]: time="2025-01-29T16:39:19.941207000Z" level=info msg="CreateContainer within sandbox \"536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 16:39:19.943573 containerd[1523]: time="2025-01-29T16:39:19.943509048Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:39:19.959669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050326227.mount: Deactivated successfully. Jan 29 16:39:19.962208 containerd[1523]: time="2025-01-29T16:39:19.962057766Z" level=info msg="CreateContainer within sandbox \"536310037e6b344c965b85dca1bedc8eb75a1d7df39f086dfd18955892cfd197\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8cd3fdcc59c0c1ec9976fce17961fd1c132d2405d8cc49ae3f87bb57a9dd36b6\"" Jan 29 16:39:19.963443 containerd[1523]: time="2025-01-29T16:39:19.963363048Z" level=info msg="StartContainer for \"8cd3fdcc59c0c1ec9976fce17961fd1c132d2405d8cc49ae3f87bb57a9dd36b6\"" Jan 29 16:39:20.008984 systemd[1]: Started cri-containerd-8cd3fdcc59c0c1ec9976fce17961fd1c132d2405d8cc49ae3f87bb57a9dd36b6.scope - libcontainer container 8cd3fdcc59c0c1ec9976fce17961fd1c132d2405d8cc49ae3f87bb57a9dd36b6. 
Jan 29 16:39:20.050640 containerd[1523]: time="2025-01-29T16:39:20.050564433Z" level=info msg="StartContainer for \"8cd3fdcc59c0c1ec9976fce17961fd1c132d2405d8cc49ae3f87bb57a9dd36b6\" returns successfully" Jan 29 16:39:20.419957 kubelet[1925]: E0129 16:39:20.419869 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:20.582144 systemd-networkd[1440]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 16:39:21.420440 kubelet[1925]: E0129 16:39:21.420357 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:22.421358 kubelet[1925]: E0129 16:39:22.421283 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:23.421561 kubelet[1925]: E0129 16:39:23.421486 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:24.423099 kubelet[1925]: E0129 16:39:24.423002 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:25.423813 kubelet[1925]: E0129 16:39:25.423706 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:26.425027 kubelet[1925]: E0129 16:39:26.424947 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:27.425736 kubelet[1925]: E0129 16:39:27.425646 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:28.425925 kubelet[1925]: E0129 16:39:28.425847 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:29.427022 kubelet[1925]: E0129 16:39:29.426946 1925 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:39:30.427924 kubelet[1925]: E0129 16:39:30.427809 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"