Dec 13 13:32:53.053967 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:32:53.054018 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:53.054034 kernel: BIOS-provided physical RAM map:
Dec 13 13:32:53.054051 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:32:53.054062 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:32:53.054073 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:32:53.054085 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 13:32:53.054097 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 13:32:53.054115 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:32:53.054126 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:32:53.054138 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:32:53.054149 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:32:53.054164 kernel: NX (Execute Disable) protection: active
Dec 13 13:32:53.054181 kernel: APIC: Static calls initialized
Dec 13 13:32:53.054194 kernel: SMBIOS 2.8 present.
Dec 13 13:32:53.054207 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 13:32:53.054219 kernel: Hypervisor detected: KVM
Dec 13 13:32:53.054235 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:32:53.054247 kernel: kvm-clock: using sched offset of 4458323976 cycles
Dec 13 13:32:53.054260 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:32:53.054874 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 13:32:53.054891 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:32:53.054909 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:32:53.054921 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 13:32:53.054934 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:32:53.054946 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:32:53.054972 kernel: Using GB pages for direct mapping
Dec 13 13:32:53.054985 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:32:53.054997 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 13:32:53.055009 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055022 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055034 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055046 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 13:32:53.055058 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055071 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055087 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055100 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.055112 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 13:32:53.055124 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 13:32:53.055137 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 13:32:53.055155 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 13:32:53.055168 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 13:32:53.055185 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 13:32:53.055200 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 13:32:53.055212 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 13:32:53.055225 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 13:32:53.055238 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 13:32:53.055250 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 13:32:53.055262 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 13:32:53.055275 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 13:32:53.055303 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 13:32:53.055315 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 13:32:53.055327 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 13:32:53.055339 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 13:32:53.055364 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 13:32:53.055377 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 13:32:53.055389 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 13:32:53.055402 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 13:32:53.055415 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 13:32:53.055427 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 13:32:53.055444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 13:32:53.055457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 13:32:53.055470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 13:32:53.055482 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 13:32:53.055507 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 13:32:53.055521 kernel: Zone ranges:
Dec 13 13:32:53.055534 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:32:53.055547 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 13:32:53.055559 kernel: Normal empty
Dec 13 13:32:53.055577 kernel: Movable zone start for each node
Dec 13 13:32:53.055590 kernel: Early memory node ranges
Dec 13 13:32:53.055603 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:32:53.055616 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 13:32:53.055628 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 13:32:53.055641 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:32:53.055654 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:32:53.055667 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 13:32:53.055679 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:32:53.055696 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:32:53.055709 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:32:53.055722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:32:53.055735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:32:53.055748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:32:53.055760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:32:53.055804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:32:53.055817 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:32:53.055829 kernel: TSC deadline timer available
Dec 13 13:32:53.055846 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 13:32:53.055858 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:32:53.055870 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:32:53.055882 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:32:53.055894 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:32:53.055906 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 13:32:53.055918 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 13:32:53.055930 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 13:32:53.055942 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 13:32:53.055958 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:32:53.055970 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:32:53.055995 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:53.056009 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:32:53.056021 kernel: random: crng init done
Dec 13 13:32:53.056034 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:32:53.056047 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:32:53.056059 kernel: Fallback order for Node 0: 0
Dec 13 13:32:53.056076 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 13:32:53.056089 kernel: Policy zone: DMA32
Dec 13 13:32:53.056109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:32:53.056122 kernel: software IO TLB: area num 16.
Dec 13 13:32:53.056135 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 196876K reserved, 0K cma-reserved)
Dec 13 13:32:53.056148 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 13:32:53.056160 kernel: Kernel/User page tables isolation: enabled
Dec 13 13:32:53.056173 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:32:53.056186 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:32:53.056203 kernel: Dynamic Preempt: voluntary
Dec 13 13:32:53.056216 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:32:53.056235 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:32:53.056248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 13:32:53.056261 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:32:53.056287 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:32:53.056316 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:32:53.056329 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:32:53.056342 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 13:32:53.056355 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 13:32:53.056368 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:32:53.056380 kernel: Console: colour VGA+ 80x25
Dec 13 13:32:53.056398 kernel: printk: console [tty0] enabled
Dec 13 13:32:53.056411 kernel: printk: console [ttyS0] enabled
Dec 13 13:32:53.056423 kernel: ACPI: Core revision 20230628
Dec 13 13:32:53.056436 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:32:53.056449 kernel: x2apic enabled
Dec 13 13:32:53.056466 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:32:53.056503 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 13:32:53.056518 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 13:32:53.056532 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:32:53.056545 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 13:32:53.056558 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 13:32:53.056571 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:32:53.056584 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:32:53.056597 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:32:53.056610 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:32:53.056629 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 13:32:53.056642 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:32:53.056656 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:32:53.056669 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 13:32:53.056681 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 13:32:53.056694 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 13:32:53.056708 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:32:53.056721 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:32:53.056734 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:32:53.056747 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:32:53.056839 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 13:32:53.056870 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:32:53.056883 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:32:53.056896 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:32:53.056908 kernel: landlock: Up and running.
Dec 13 13:32:53.056920 kernel: SELinux: Initializing.
Dec 13 13:32:53.056932 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:32:53.056945 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:32:53.056957 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 13:32:53.056970 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:32:53.056982 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:32:53.057013 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:32:53.057027 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 13:32:53.057041 kernel: signal: max sigframe size: 1776
Dec 13 13:32:53.057054 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:32:53.057068 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:32:53.057081 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 13:32:53.057094 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:32:53.057108 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:32:53.057121 kernel: .... node #0, CPUs: #1
Dec 13 13:32:53.057139 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 13:32:53.057152 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:32:53.057165 kernel: smpboot: Max logical packages: 16
Dec 13 13:32:53.057178 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 13:32:53.057192 kernel: devtmpfs: initialized
Dec 13 13:32:53.057206 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:32:53.057219 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:32:53.057233 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 13:32:53.057246 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:32:53.057264 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:32:53.057277 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:32:53.057291 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:32:53.057304 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:32:53.057318 kernel: audit: type=2000 audit(1734096771.811:1): state=initialized audit_enabled=0 res=1
Dec 13 13:32:53.057342 kernel: cpuidle: using governor menu
Dec 13 13:32:53.057355 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:32:53.057367 kernel: dca service started, version 1.12.1
Dec 13 13:32:53.057380 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:32:53.057408 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:32:53.057421 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:32:53.057433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:32:53.057445 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:32:53.057470 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:32:53.057483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:32:53.057510 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:32:53.057537 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:32:53.057550 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:32:53.057569 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:32:53.057583 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:32:53.057596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:32:53.057609 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:32:53.057623 kernel: ACPI: Interpreter enabled
Dec 13 13:32:53.057636 kernel: ACPI: PM: (supports S0 S5)
Dec 13 13:32:53.057649 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:32:53.057662 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:32:53.057676 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:32:53.057694 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:32:53.057708 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:32:53.058059 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:32:53.058255 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:32:53.058434 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:32:53.058453 kernel: PCI host bridge to bus 0000:00
Dec 13 13:32:53.058661 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:32:53.058868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:32:53.059039 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:32:53.059253 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 13:32:53.059507 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:32:53.059672 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 13:32:53.059884 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:32:53.060081 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:32:53.060290 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 13:32:53.060484 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 13:32:53.060685 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 13:32:53.060912 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 13:32:53.061109 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:32:53.061303 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.061506 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 13:32:53.061749 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.062002 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 13:32:53.062207 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.062385 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 13:32:53.062617 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.062825 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 13:32:53.063044 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.063254 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 13:32:53.063458 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.063646 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 13:32:53.063876 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.064066 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 13:32:53.064257 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 13:32:53.064439 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 13:32:53.064653 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:32:53.065775 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:32:53.066010 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 13:32:53.066172 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 13:32:53.066364 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 13:32:53.066571 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:32:53.066746 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:32:53.066944 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 13:32:53.067133 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 13:32:53.068910 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:32:53.069095 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:32:53.069298 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:32:53.069508 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 13:32:53.069683 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 13:32:53.070941 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:32:53.071143 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:32:53.071338 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 13:32:53.071557 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 13:32:53.071741 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 13:32:53.072975 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 13:32:53.073155 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:32:53.073351 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 13:32:53.073590 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 13:32:53.074842 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 13:32:53.075035 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 13:32:53.075216 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 13:32:53.075403 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 13:32:53.075596 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 13:32:53.077088 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 13:32:53.077311 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 13:32:53.077503 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:32:53.077708 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 13:32:53.078960 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 13:32:53.079144 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 13:32:53.079342 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 13:32:53.079535 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:32:53.079712 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 13:32:53.079959 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 13:32:53.080135 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:32:53.080312 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 13:32:53.080485 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 13:32:53.080675 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:32:53.083904 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 13:32:53.084101 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 13:32:53.084289 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:32:53.084507 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 13:32:53.084693 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 13:32:53.084885 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:32:53.085066 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 13:32:53.085261 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 13:32:53.085431 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:32:53.085464 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:32:53.085478 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:32:53.085523 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:32:53.085537 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:32:53.085558 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:32:53.085572 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:32:53.085586 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:32:53.085599 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:32:53.085613 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:32:53.085626 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:32:53.085640 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:32:53.085654 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:32:53.085667 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:32:53.085686 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:32:53.085700 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:32:53.085713 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:32:53.085727 kernel: iommu: Default domain type: Translated
Dec 13 13:32:53.085741 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:32:53.085754 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:32:53.085768 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:32:53.087800 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:32:53.087820 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 13:32:53.088012 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:32:53.088215 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:32:53.088380 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:32:53.088399 kernel: vgaarb: loaded
Dec 13 13:32:53.088412 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:32:53.088426 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:32:53.088439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:32:53.088451 kernel: pnp: PnP ACPI init
Dec 13 13:32:53.088690 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:32:53.088713 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 13:32:53.088727 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:32:53.088741 kernel: NET: Registered PF_INET protocol family
Dec 13 13:32:53.088754 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:32:53.088781 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 13:32:53.088797 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:32:53.088818 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:32:53.088839 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 13:32:53.088853 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 13:32:53.088867 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:32:53.088888 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:32:53.088902 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:32:53.088915 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:32:53.089106 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 13:32:53.089281 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 13:32:53.089512 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 13:32:53.089704 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 13:32:53.094048 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 13:32:53.094240 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 13:32:53.094426 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 13:32:53.094626 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 13:32:53.094856 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 13:32:53.095044 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 13:32:53.095215 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 13:32:53.095390 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 13:32:53.095578 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 13:32:53.095752 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 13:32:53.099139 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 13:32:53.099394 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 13:32:53.099635 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 13:32:53.099867 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 13:32:53.100058 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 13:32:53.100231 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 13:32:53.100405 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 13:32:53.100602 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:32:53.102809 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 13:32:53.103002 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 13:32:53.103193 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 13:32:53.103369 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:32:53.103570 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 13:32:53.103745 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 13:32:53.105962 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 13:32:53.106176 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:32:53.106356 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 13:32:53.106546 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 13:32:53.106718 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 13:32:53.106908 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:32:53.107088 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 13:32:53.107260 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 13:32:53.107447 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 13:32:53.107630 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:32:53.107836 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 13:32:53.108041 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 13:32:53.108229 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 13:32:53.108400 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:32:53.108585 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 13:32:53.108770 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 13:32:53.110999 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 13:32:53.111175 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:32:53.111348 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 13:32:53.111542 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 13:32:53.111716 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 13:32:53.112568 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:32:53.112751 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:32:53.112963 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:32:53.113117 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:32:53.113293 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 13:32:53.113459 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:32:53.113637 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 13:32:53.113889 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 13:32:53.114047 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 13:32:53.114201 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:32:53.114365 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 13:32:53.114582 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 13:32:53.114746 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 13:32:53.114927 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:32:53.115118 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 13:32:53.115290 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 13:32:53.115468 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:32:53.115680 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 13:32:53.115913 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 13:32:53.116081 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:32:53.116251 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 13:32:53.116425 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 13:32:53.116608 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:32:53.116799 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 13:32:53.116973 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 13:32:53.117147 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:32:53.117326 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 13:32:53.117514 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 13:32:53.117680 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:32:53.117927 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 13:32:53.118092 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 13:32:53.118258 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:32:53.118279 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:32:53.118294 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:32:53.118319 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 
13 13:32:53.118333 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 13:32:53.118346 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:32:53.118360 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 13:32:53.118373 kernel: Initialise system trusted keyrings Dec 13 13:32:53.118395 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:32:53.118409 kernel: Key type asymmetric registered Dec 13 13:32:53.118421 kernel: Asymmetric key parser 'x509' registered Dec 13 13:32:53.118434 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:32:53.118461 kernel: io scheduler mq-deadline registered Dec 13 13:32:53.118484 kernel: io scheduler kyber registered Dec 13 13:32:53.118510 kernel: io scheduler bfq registered Dec 13 13:32:53.118704 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 13:32:53.118943 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 13:32:53.119133 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.119305 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 13:32:53.119476 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 13:32:53.119679 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.119874 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 13:32:53.120047 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 13:32:53.120227 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.120399 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 
13:32:53.120585 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 13:32:53.120756 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.120960 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 13:32:53.121131 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 13:32:53.121312 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.121528 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 13:32:53.121700 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 13:32:53.121896 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.122100 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 13:32:53.122282 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 13:32:53.122464 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.122656 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 13:32:53.122881 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 13:32:53.123053 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:32:53.123075 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:32:53.123091 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 13:32:53.123112 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 13:32:53.123127 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:32:53.123142 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:32:53.123156 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 13:32:53.123170 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:32:53.123184 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:32:53.123198 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:32:53.123373 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 13:32:53.123585 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 13:32:53.123756 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T13:32:52 UTC (1734096772) Dec 13 13:32:53.123946 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 13:32:53.123968 kernel: intel_pstate: CPU model not supported Dec 13 13:32:53.123982 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:32:53.123996 kernel: Segment Routing with IPv6 Dec 13 13:32:53.124023 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:32:53.124036 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:32:53.124050 kernel: Key type dns_resolver registered Dec 13 13:32:53.124070 kernel: IPI shorthand broadcast: enabled Dec 13 13:32:53.124084 kernel: sched_clock: Marking stable (1125027572, 239928974)->(1604215517, -239258971) Dec 13 13:32:53.124110 kernel: registered taskstats version 1 Dec 13 13:32:53.124124 kernel: Loading compiled-in X.509 certificates Dec 13 13:32:53.124137 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:32:53.124150 kernel: Key type .fscrypt registered Dec 13 13:32:53.124163 kernel: Key type fscrypt-provisioning registered Dec 13 13:32:53.124176 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:32:53.124189 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:32:53.124209 kernel: ima: No architecture policies found
Dec 13 13:32:53.124223 kernel: clk: Disabling unused clocks
Dec 13 13:32:53.124236 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:32:53.124249 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:32:53.124263 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:32:53.124278 kernel: Run /init as init process
Dec 13 13:32:53.124291 kernel: with arguments:
Dec 13 13:32:53.124304 kernel: /init
Dec 13 13:32:53.124317 kernel: with environment:
Dec 13 13:32:53.124334 kernel: HOME=/
Dec 13 13:32:53.124347 kernel: TERM=linux
Dec 13 13:32:53.124365 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:32:53.124402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:32:53.124421 systemd[1]: Detected virtualization kvm.
Dec 13 13:32:53.124436 systemd[1]: Detected architecture x86-64.
Dec 13 13:32:53.124450 systemd[1]: Running in initrd.
Dec 13 13:32:53.124465 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:32:53.124520 systemd[1]: Hostname set to .
Dec 13 13:32:53.124535 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:32:53.124550 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:32:53.124565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:32:53.124580 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:32:53.124596 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:32:53.124611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:32:53.124627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:32:53.124648 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:32:53.124665 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:32:53.124680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:32:53.124695 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:32:53.124711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:32:53.124725 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:32:53.124746 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:32:53.124761 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:32:53.124801 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:32:53.124816 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:32:53.124831 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:32:53.124847 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:32:53.124861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:32:53.124877 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:32:53.124892 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:32:53.124913 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:32:53.124928 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:32:53.124943 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:32:53.124959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:32:53.124986 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:32:53.125000 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:32:53.125015 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:32:53.125029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:32:53.125056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:53.125131 systemd-journald[202]: Collecting audit messages is disabled.
Dec 13 13:32:53.125165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:32:53.125181 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:32:53.125195 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:32:53.125217 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:32:53.125232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:32:53.125246 kernel: Bridge firewalling registered
Dec 13 13:32:53.125276 systemd-journald[202]: Journal started
Dec 13 13:32:53.125309 systemd-journald[202]: Runtime Journal (/run/log/journal/c5ce0cc95e274956b86ad167fe5aa67f) is 4.7M, max 37.9M, 33.2M free.
Dec 13 13:32:53.059522 systemd-modules-load[203]: Inserted module 'overlay'
Dec 13 13:32:53.102209 systemd-modules-load[203]: Inserted module 'br_netfilter'
Dec 13 13:32:53.179658 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:32:53.181057 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:32:53.182144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:53.183646 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:32:53.200017 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:32:53.202548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:32:53.205167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:32:53.211390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:32:53.222080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:32:53.236918 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:53.238024 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:32:53.247035 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:32:53.249900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:32:53.261001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:32:53.262662 dracut-cmdline[235]: dracut-dracut-053
Dec 13 13:32:53.265068 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:53.312529 systemd-resolved[241]: Positive Trust Anchors:
Dec 13 13:32:53.312563 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:32:53.312608 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:32:53.317824 systemd-resolved[241]: Defaulting to hostname 'linux'.
Dec 13 13:32:53.319949 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:32:53.321021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:32:53.386841 kernel: SCSI subsystem initialized
Dec 13 13:32:53.398797 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:32:53.412852 kernel: iscsi: registered transport (tcp)
Dec 13 13:32:53.439860 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:32:53.439910 kernel: QLogic iSCSI HBA Driver
Dec 13 13:32:53.508656 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:32:53.516007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:32:53.556468 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:32:53.556546 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:32:53.556570 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:32:53.607856 kernel: raid6: sse2x4 gen() 12512 MB/s
Dec 13 13:32:53.625836 kernel: raid6: sse2x2 gen() 9139 MB/s
Dec 13 13:32:53.644637 kernel: raid6: sse2x1 gen() 9123 MB/s
Dec 13 13:32:53.644679 kernel: raid6: using algorithm sse2x4 gen() 12512 MB/s
Dec 13 13:32:53.663595 kernel: raid6: .... xor() 7115 MB/s, rmw enabled
Dec 13 13:32:53.663656 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 13:32:53.691880 kernel: xor: automatically using best checksumming function avx
Dec 13 13:32:53.870833 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:32:53.887671 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:32:53.894016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:32:53.922481 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Dec 13 13:32:53.930012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:32:53.936941 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:32:53.963217 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Dec 13 13:32:54.004615 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:32:54.011960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:32:54.127807 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:32:54.135071 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:32:54.164677 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:32:54.166877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:32:54.168745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:32:54.172160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:32:54.180433 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:32:54.214033 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:32:54.254388 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Dec 13 13:32:54.332587 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:32:54.332615 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 13:32:54.332843 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:32:54.332877 kernel: GPT:17805311 != 125829119
Dec 13 13:32:54.332905 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:32:54.332923 kernel: GPT:17805311 != 125829119
Dec 13 13:32:54.332940 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:32:54.332970 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:54.332988 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 13:32:54.333006 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:32:54.333024 kernel: ACPI: bus type USB registered
Dec 13 13:32:54.333041 kernel: usbcore: registered new interface driver usbfs
Dec 13 13:32:54.333060 kernel: usbcore: registered new interface driver hub
Dec 13 13:32:54.333082 kernel: usbcore: registered new device driver usb
Dec 13 13:32:54.306306 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:32:54.306508 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:54.311081 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:32:54.312163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:32:54.312731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:54.328125 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:54.339555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:54.359810 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 13:32:54.416326 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 13:32:54.416582 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 13:32:54.416834 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 13:32:54.417049 kernel: libata version 3.00 loaded.
Dec 13 13:32:54.417071 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 13:32:54.417312 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 13:32:54.417549 kernel: hub 1-0:1.0: USB hub found
Dec 13 13:32:54.420796 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 13:32:54.421048 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 13:32:54.421275 kernel: hub 2-0:1.0: USB hub found
Dec 13 13:32:54.421528 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 13:32:54.428792 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:32:54.504141 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:32:54.504184 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:32:54.504415 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:32:54.504637 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (475)
Dec 13 13:32:54.504660 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (467)
Dec 13 13:32:54.504679 kernel: scsi host0: ahci
Dec 13 13:32:54.505583 kernel: scsi host1: ahci
Dec 13 13:32:54.505838 kernel: scsi host2: ahci
Dec 13 13:32:54.506026 kernel: scsi host3: ahci
Dec 13 13:32:54.506246 kernel: scsi host4: ahci
Dec 13 13:32:54.506480 kernel: scsi host5: ahci
Dec 13 13:32:54.506681 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Dec 13 13:32:54.506711 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Dec 13 13:32:54.506731 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Dec 13 13:32:54.506760 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Dec 13 13:32:54.506791 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Dec 13 13:32:54.506820 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Dec 13 13:32:54.464079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:32:54.513026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:54.521425 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:32:54.528109 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:32:54.528918 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:32:54.537187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:32:54.546060 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:32:54.549593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:32:54.555638 disk-uuid[563]: Primary Header is updated.
Dec 13 13:32:54.555638 disk-uuid[563]: Secondary Entries is updated.
Dec 13 13:32:54.555638 disk-uuid[563]: Secondary Header is updated.
Dec 13 13:32:54.561806 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:54.589215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:54.658802 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 13:32:54.799872 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:32:54.811335 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:54.811383 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:54.812226 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:54.813895 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:54.816713 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:54.818467 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:54.829483 kernel: usbcore: registered new interface driver usbhid
Dec 13 13:32:54.829523 kernel: usbhid: USB HID core driver
Dec 13 13:32:54.837582 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 13:32:54.837624 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 13 13:32:55.576392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:55.577264 disk-uuid[564]: The operation has completed successfully.
Dec 13 13:32:55.626372 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:32:55.626554 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:32:55.656986 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:32:55.662889 sh[583]: Success
Dec 13 13:32:55.679949 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Dec 13 13:32:55.743502 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:32:55.752026 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:32:55.755293 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:32:55.784485 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:32:55.784545 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:55.788321 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:32:55.788359 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:32:55.791592 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:32:55.801710 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:32:55.803309 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:32:55.814043 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:32:55.817964 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:32:55.836867 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:55.836930 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:55.836956 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:32:55.844804 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:32:55.859300 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:32:55.860580 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:55.867657 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:32:55.876057 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:32:56.000909 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:32:56.008093 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:32:56.018735 ignition[682]: Ignition 2.20.0
Dec 13 13:32:56.020306 ignition[682]: Stage: fetch-offline
Dec 13 13:32:56.020442 ignition[682]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.020463 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:32:56.020644 ignition[682]: parsed url from cmdline: ""
Dec 13 13:32:56.020652 ignition[682]: no config URL provided
Dec 13 13:32:56.020662 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:32:56.023986 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:32:56.020678 ignition[682]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:32:56.020693 ignition[682]: failed to fetch config: resource requires networking
Dec 13 13:32:56.021003 ignition[682]: Ignition finished successfully
Dec 13 13:32:56.049746 systemd-networkd[769]: lo: Link UP
Dec 13 13:32:56.049761 systemd-networkd[769]: lo: Gained carrier
Dec 13 13:32:56.052345 systemd-networkd[769]: Enumeration completed
Dec 13 13:32:56.052495 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:32:56.052961 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:32:56.052967 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:32:56.054616 systemd-networkd[769]: eth0: Link UP
Dec 13 13:32:56.054622 systemd-networkd[769]: eth0: Gained carrier
Dec 13 13:32:56.054633 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:32:56.054997 systemd[1]: Reached target network.target - Network.
Dec 13 13:32:56.059971 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 13:32:56.081332 ignition[773]: Ignition 2.20.0
Dec 13 13:32:56.081354 ignition[773]: Stage: fetch
Dec 13 13:32:56.081600 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.081619 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:32:56.084125 systemd-networkd[769]: eth0: DHCPv4 address 10.230.34.102/30, gateway 10.230.34.101 acquired from 10.230.34.101
Dec 13 13:32:56.081743 ignition[773]: parsed url from cmdline: ""
Dec 13 13:32:56.081758 ignition[773]: no config URL provided
Dec 13 13:32:56.081767 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:32:56.081801 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:32:56.081980 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 13:32:56.082122 ignition[773]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 13:32:56.082159 ignition[773]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 13:32:56.082267 ignition[773]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 13:32:56.283273 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Dec 13 13:32:56.299482 ignition[773]: GET result: OK
Dec 13 13:32:56.299596 ignition[773]: parsing config with SHA512: ce904e249039f7421761bf29c48eff0f327a9139db24d466ac9d8e04e361b0313943d759cede649cf8db6489c64a818fa539fd882080d84d0630cfcf30582873
Dec 13 13:32:56.303379 unknown[773]: fetched base config from "system"
Dec 13 13:32:56.303428 unknown[773]: fetched base config from "system"
Dec 13 13:32:56.303716 ignition[773]: fetch: fetch complete
Dec 13 13:32:56.303437 unknown[773]: fetched user config from "openstack"
Dec 13 13:32:56.303725 ignition[773]: fetch: fetch passed
Dec 13 13:32:56.306033 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 13:32:56.303964 ignition[773]: Ignition finished successfully
Dec 13 13:32:56.318983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:32:56.339547 ignition[780]: Ignition 2.20.0
Dec 13 13:32:56.339566 ignition[780]: Stage: kargs
Dec 13 13:32:56.339844 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.339876 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:32:56.343041 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:32:56.340944 ignition[780]: kargs: kargs passed
Dec 13 13:32:56.341012 ignition[780]: Ignition finished successfully
Dec 13 13:32:56.349972 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:32:56.367623 ignition[786]: Ignition 2.20.0
Dec 13 13:32:56.367643 ignition[786]: Stage: disks
Dec 13 13:32:56.367911 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.370087 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:32:56.367931 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:32:56.371394 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:32:56.368757 ignition[786]: disks: disks passed
Dec 13 13:32:56.372637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:32:56.368853 ignition[786]: Ignition finished successfully
Dec 13 13:32:56.374257 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:32:56.376903 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:32:56.378200 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:32:56.385977 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:32:56.406097 systemd-fsck[794]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 13:32:56.411105 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:32:56.416927 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:32:56.526791 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:32:56.527930 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:32:56.529995 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:32:56.541878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:32:56.545069 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:32:56.546434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:32:56.548832 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 13:32:56.549598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:32:56.549640 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:32:56.560817 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Dec 13 13:32:56.566787 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:56.566837 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:56.566860 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:32:56.569554 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:32:56.582143 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:32:56.582385 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:32:56.586577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:32:56.674460 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:32:56.682634 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:32:56.691286 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:32:56.699181 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:32:56.821891 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:32:56.829940 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:32:56.834003 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:32:56.843104 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:32:56.845745 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:56.869897 ignition[922]: INFO : Ignition 2.20.0
Dec 13 13:32:56.871886 ignition[922]: INFO : Stage: mount
Dec 13 13:32:56.875340 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.876220 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:32:56.877310 ignition[922]: INFO : mount: mount passed
Dec 13 13:32:56.877310 ignition[922]: INFO : Ignition finished successfully
Dec 13 13:32:56.876964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:32:56.879416 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:32:58.018191 systemd-networkd[769]: eth0: Gained IPv6LL
Dec 13 13:32:59.529127 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8899:24:19ff:fee6:2266/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8899:24:19ff:fee6:2266/64 assigned by NDisc.
Dec 13 13:32:59.529162 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 13:33:03.750241 coreos-metadata[804]: Dec 13 13:33:03.750 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:33:03.775978 coreos-metadata[804]: Dec 13 13:33:03.775 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 13:33:03.789255 coreos-metadata[804]: Dec 13 13:33:03.789 INFO Fetch successful
Dec 13 13:33:03.790835 coreos-metadata[804]: Dec 13 13:33:03.790 INFO wrote hostname srv-qxyl2.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 13:33:03.792543 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 13:33:03.792862 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 13:33:03.800964 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:33:03.831968 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:33:03.846872 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941)
Dec 13 13:33:03.852876 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:33:03.852917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:33:03.855407 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:33:03.859829 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:33:03.863291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:33:03.888884 ignition[958]: INFO : Ignition 2.20.0
Dec 13 13:33:03.888884 ignition[958]: INFO : Stage: files
Dec 13 13:33:03.890529 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:33:03.890529 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:33:03.890529 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:33:03.893281 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:33:03.893281 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:33:03.895293 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:33:03.896255 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:33:03.896255 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:33:03.895924 unknown[958]: wrote ssh authorized keys file for user: core
Dec 13 13:33:03.899233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:33:03.899233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:33:03.899233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:33:03.899233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:33:03.899233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:33:03.905250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:33:03.905250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:33:03.905250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 13:33:04.498709 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 13:33:06.336987 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:33:06.340209 ignition[958]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:33:06.340209 ignition[958]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:33:06.340209 ignition[958]: INFO : files: files passed
Dec 13 13:33:06.340209 ignition[958]: INFO : Ignition finished successfully
Dec 13 13:33:06.340947 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:33:06.353189 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:33:06.358915 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:33:06.365407 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:33:06.366209 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:33:06.378018 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:33:06.378018 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:33:06.380519 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:33:06.381857 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:33:06.383311 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:33:06.392014 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:33:06.430401 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:33:06.430601 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:33:06.432669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:33:06.433964 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:33:06.435665 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:33:06.441058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:33:06.460637 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:33:06.468990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:33:06.482505 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:33:06.484420 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:33:06.485369 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:33:06.487025 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:33:06.487238 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:33:06.489035 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:33:06.490033 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:33:06.491567 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:33:06.493061 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:33:06.494388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:33:06.496006 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:33:06.497567 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:33:06.499207 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:33:06.500676 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:33:06.502277 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:33:06.503640 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:33:06.503850 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:33:06.505628 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:33:06.506592 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:33:06.508091 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:33:06.508277 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:33:06.509789 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:33:06.510058 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:33:06.511968 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:33:06.512139 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:33:06.513791 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:33:06.513949 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:33:06.521096 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:33:06.524095 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:33:06.524852 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:33:06.525247 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:33:06.530955 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:33:06.531134 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:33:06.540136 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:33:06.540818 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:33:06.553806 ignition[1011]: INFO : Ignition 2.20.0
Dec 13 13:33:06.553806 ignition[1011]: INFO : Stage: umount
Dec 13 13:33:06.553806 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:33:06.553806 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:33:06.559183 ignition[1011]: INFO : umount: umount passed
Dec 13 13:33:06.559183 ignition[1011]: INFO : Ignition finished successfully
Dec 13 13:33:06.557241 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:33:06.558061 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:33:06.558241 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:33:06.561903 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:33:06.562057 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:33:06.563061 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:33:06.563155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:33:06.564416 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:33:06.564496 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:33:06.565757 systemd[1]: Stopped target network.target - Network.
Dec 13 13:33:06.567074 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:33:06.567179 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:33:06.568533 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:33:06.569837 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:33:06.573851 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:33:06.575372 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:33:06.576924 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:33:06.584371 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:33:06.584450 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:33:06.585903 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:33:06.585974 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:33:06.587492 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:33:06.587563 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:33:06.588857 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:33:06.588922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:33:06.590432 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:33:06.591907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:33:06.593665 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:33:06.593845 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:33:06.595084 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:33:06.595240 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:33:06.596047 systemd-networkd[769]: eth0: DHCPv6 lease lost
Dec 13 13:33:06.598986 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:33:06.599179 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:33:06.601299 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:33:06.601412 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:33:06.608949 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:33:06.611219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:33:06.611294 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:33:06.612251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:33:06.614168 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:33:06.614338 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:33:06.620674 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:33:06.620773 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:33:06.622023 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:33:06.622096 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:33:06.623613 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:33:06.623681 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:33:06.625721 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:33:06.626023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:33:06.633107 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:33:06.633227 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:33:06.636051 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:33:06.636120 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:33:06.637517 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:33:06.637597 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:33:06.639711 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:33:06.639817 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:33:06.641105 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:33:06.641212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:33:06.644950 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:33:06.648911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:33:06.648995 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:33:06.651055 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:33:06.651126 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:33:06.652808 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:33:06.652892 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:33:06.654443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:33:06.654516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:33:06.658153 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:33:06.658307 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:33:06.667219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:33:06.667378 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:33:06.669148 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:33:06.677009 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:33:06.687950 systemd[1]: Switching root.
Dec 13 13:33:06.725272 systemd-journald[202]: Journal stopped
Dec 13 13:33:08.273713 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:33:08.273858 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:33:08.273909 kernel: SELinux: policy capability open_perms=1
Dec 13 13:33:08.273944 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:33:08.273963 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:33:08.273999 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:33:08.274022 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:33:08.274056 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:33:08.274083 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:33:08.274120 kernel: audit: type=1403 audit(1734096787.055:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:33:08.274151 systemd[1]: Successfully loaded SELinux policy in 50.881ms.
Dec 13 13:33:08.274183 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.414ms.
Dec 13 13:33:08.274206 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:33:08.274228 systemd[1]: Detected virtualization kvm.
Dec 13 13:33:08.274268 systemd[1]: Detected architecture x86-64.
Dec 13 13:33:08.274291 systemd[1]: Detected first boot.
Dec 13 13:33:08.274312 systemd[1]: Hostname set to .
Dec 13 13:33:08.274340 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:33:08.274362 zram_generator::config[1053]: No configuration found.
Dec 13 13:33:08.274424 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:33:08.274460 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:33:08.274485 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:33:08.274523 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:33:08.274551 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:33:08.274573 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:33:08.274595 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:33:08.274623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:33:08.274657 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:33:08.274680 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:33:08.274701 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:33:08.274722 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:33:08.274744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:33:08.277797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:33:08.277835 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:33:08.277867 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:33:08.277908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:33:08.277944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:33:08.277966 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:33:08.277987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:33:08.278007 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:33:08.278028 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:33:08.278061 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:33:08.278102 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:33:08.278129 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:33:08.278151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:33:08.278172 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:33:08.278193 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:33:08.278213 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:33:08.278235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:33:08.278256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:33:08.278276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:33:08.278310 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:33:08.278334 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:33:08.278355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:33:08.278376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:33:08.278407 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:33:08.278428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:08.278461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:33:08.278481 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:33:08.278519 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:33:08.278553 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:33:08.278588 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:33:08.278610 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:33:08.278639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:33:08.278661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:33:08.278702 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:33:08.278738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:33:08.278791 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:33:08.278818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:33:08.278840 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:33:08.278861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:33:08.278883 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:33:08.278904 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:33:08.278939 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:33:08.278962 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:33:08.278984 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:33:08.279017 kernel: loop: module loaded
Dec 13 13:33:08.279036 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:33:08.279055 kernel: fuse: init (API version 7.39)
Dec 13 13:33:08.279087 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:33:08.279124 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:33:08.279153 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:33:08.279191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:33:08.279215 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:33:08.279238 systemd[1]: Stopped verity-setup.service.
Dec 13 13:33:08.279260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:08.279281 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:33:08.279303 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:33:08.279325 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:33:08.279346 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:33:08.279380 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:33:08.279403 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:33:08.279433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:33:08.279484 systemd-journald[1145]: Collecting audit messages is disabled.
Dec 13 13:33:08.279522 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:33:08.279571 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:33:08.279594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:33:08.279643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:33:08.279693 systemd-journald[1145]: Journal started
Dec 13 13:33:08.279758 systemd-journald[1145]: Runtime Journal (/run/log/journal/c5ce0cc95e274956b86ad167fe5aa67f) is 4.7M, max 37.9M, 33.2M free.
Dec 13 13:33:07.865260 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:33:07.893787 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:33:07.894591 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:33:08.289801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:33:08.296796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:33:08.299807 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:33:08.303182 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:33:08.303880 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:33:08.305170 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:33:08.306886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:33:08.308244 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:33:08.309478 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:33:08.311287 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:33:08.346866 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:33:08.350797 kernel: ACPI: bus type drm_connector registered
Dec 13 13:33:08.360164 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:33:08.370909 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:33:08.372184 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:33:08.372260 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:33:08.375240 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:33:08.386025 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:33:08.389847 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:33:08.390825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:33:08.397179 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:33:08.408049 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:33:08.409271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:33:08.411532 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:33:08.414618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:33:08.421989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:33:08.426234 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:33:08.431983 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:33:08.438947 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:33:08.440229 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:33:08.441310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:33:08.443358 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:33:08.446170 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:33:08.447378 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:33:08.493341 systemd-journald[1145]: Time spent on flushing to /var/log/journal/c5ce0cc95e274956b86ad167fe5aa67f is 139.220ms for 1125 entries.
Dec 13 13:33:08.493341 systemd-journald[1145]: System Journal (/var/log/journal/c5ce0cc95e274956b86ad167fe5aa67f) is 8.0M, max 584.8M, 576.8M free.
Dec 13 13:33:08.660062 systemd-journald[1145]: Received client request to flush runtime journal.
Dec 13 13:33:08.660156 kernel: loop0: detected capacity change from 0 to 8
Dec 13 13:33:08.660187 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:33:08.660211 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 13:33:08.516936 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:33:08.520214 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:33:08.531669 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:33:08.570506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:33:08.590002 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:33:08.591034 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:33:08.600345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:33:08.607073 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:33:08.623418 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 13:33:08.634747 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Dec 13 13:33:08.634818 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Dec 13 13:33:08.641900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:33:08.649014 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:33:08.668715 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:33:08.707820 kernel: loop2: detected capacity change from 0 to 138184
Dec 13 13:33:08.705533 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:33:08.716970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:33:08.778842 kernel: loop3: detected capacity change from 0 to 141000
Dec 13 13:33:08.786904 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Dec 13 13:33:08.786935 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Dec 13 13:33:08.804818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:33:08.847596 kernel: loop4: detected capacity change from 0 to 8
Dec 13 13:33:08.854886 kernel: loop5: detected capacity change from 0 to 211296
Dec 13 13:33:08.889003 kernel: loop6: detected capacity change from 0 to 138184
Dec 13 13:33:08.915829 kernel: loop7: detected capacity change from 0 to 141000
Dec 13 13:33:08.942870 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 13:33:08.944812 (sd-merge)[1215]: Merged extensions into '/usr'.
Dec 13 13:33:08.953444 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:33:08.953492 systemd[1]: Reloading...
Dec 13 13:33:09.095368 zram_generator::config[1240]: No configuration found.
Dec 13 13:33:09.260919 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:33:09.307062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:33:09.379660 systemd[1]: Reloading finished in 423 ms.
Dec 13 13:33:09.412246 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:33:09.414621 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:33:09.430984 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:33:09.434037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:33:09.454508 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:33:09.454531 systemd[1]: Reloading...
Dec 13 13:33:09.508228 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:33:09.508751 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:33:09.514390 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:33:09.514861 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Dec 13 13:33:09.514976 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Dec 13 13:33:09.522271 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:33:09.522289 systemd-tmpfiles[1298]: Skipping /boot
Dec 13 13:33:09.545638 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:33:09.545658 systemd-tmpfiles[1298]: Skipping /boot
Dec 13 13:33:09.563829 zram_generator::config[1325]: No configuration found.
Dec 13 13:33:09.775024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:33:09.847814 systemd[1]: Reloading finished in 392 ms.
Dec 13 13:33:09.875886 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:33:09.881318 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:33:09.898052 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:33:09.907959 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:33:09.912970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:33:09.917017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:33:09.921919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:33:09.931934 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:33:09.942502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:09.943756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:33:09.957872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:33:09.970093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:33:09.979095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:33:09.981025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:33:09.981216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:09.986477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:09.986764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:33:09.988006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:33:09.988162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:09.991716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:33:09.992114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:33:10.007170 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:33:10.015981 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:33:10.029307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:10.031998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:33:10.039897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:33:10.048964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:33:10.050881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:33:10.059160 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:33:10.060847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:33:10.064208 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:33:10.072481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:33:10.073131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:33:10.075723 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:33:10.076036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:33:10.084759 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:33:10.088501 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:33:10.092043 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:33:10.093504 systemd-udevd[1387]: Using default interface naming scheme 'v255'.
Dec 13 13:33:10.094426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:33:10.095581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:33:10.098180 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:33:10.099148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:33:10.107784 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:33:10.120594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:33:10.121130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:33:10.130009 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:33:10.131417 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:33:10.150710 augenrules[1431]: No rules
Dec 13 13:33:10.153477 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:33:10.154485 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:33:10.170133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:33:10.181992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:33:10.268882 systemd-resolved[1386]: Positive Trust Anchors:
Dec 13 13:33:10.268904 systemd-resolved[1386]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:33:10.268951 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:33:10.288142 systemd-resolved[1386]: Using system hostname 'srv-qxyl2.gb1.brightbox.com'.
Dec 13 13:33:10.292618 systemd-networkd[1441]: lo: Link UP
Dec 13 13:33:10.293084 systemd-networkd[1441]: lo: Gained carrier
Dec 13 13:33:10.294309 systemd-networkd[1441]: Enumeration completed
Dec 13 13:33:10.294568 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:33:10.295693 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:33:10.296841 systemd[1]: Reached target network.target - Network.
Dec 13 13:33:10.297644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:33:10.307408 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:33:10.312438 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:33:10.313622 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:33:10.357822 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:33:10.375059 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1451)
Dec 13 13:33:10.379811 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1451)
Dec 13 13:33:10.448828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1439)
Dec 13 13:33:10.459066 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:33:10.459411 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:33:10.464253 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:33:10.464629 systemd-networkd[1441]: eth0: Link UP
Dec 13 13:33:10.464643 systemd-networkd[1441]: eth0: Gained carrier
Dec 13 13:33:10.464662 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:33:10.489120 systemd-networkd[1441]: eth0: DHCPv4 address 10.230.34.102/30, gateway 10.230.34.101 acquired from 10.230.34.101
Dec 13 13:33:10.490657 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Dec 13 13:33:10.525810 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 13:33:10.539834 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:33:10.545649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:33:10.562147 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:33:10.591809 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 13:33:10.600050 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 13:33:10.600354 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 13:33:10.604029 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:33:10.631813 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 13:33:10.690174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:33:11.559336 systemd-resolved[1386]: Clock change detected. Flushing caches.
Dec 13 13:33:11.559767 systemd-timesyncd[1428]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Dec 13 13:33:11.560034 systemd-timesyncd[1428]: Initial clock synchronization to Fri 2024-12-13 13:33:11.559234 UTC.
Dec 13 13:33:11.656492 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:33:11.683118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:33:11.690032 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:33:11.712575 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:33:11.746001 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:33:11.747739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:33:11.748595 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:33:11.749480 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:33:11.750373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:33:11.751754 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:33:11.752685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:33:11.753484 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:33:11.754309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:33:11.754377 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:33:11.755068 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:33:11.757486 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:33:11.760102 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:33:11.766990 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:33:11.769665 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:33:11.771204 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:33:11.772087 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:33:11.772790 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:33:11.773504 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:33:11.773578 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:33:11.776813 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:33:11.784785 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:33:11.785878 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 13:33:11.792321 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:33:11.807835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:33:11.812318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:33:11.813319 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:33:11.822898 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:33:11.827175 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:33:11.835874 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:33:11.843256 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:33:11.855886 jq[1484]: false
Dec 13 13:33:11.845188 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:33:11.847931 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:33:11.851198 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:33:11.859809 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:33:11.864224 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:33:11.872216 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:33:11.872458 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:33:11.886951 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:33:11.889742 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:33:11.897364 jq[1493]: true
Dec 13 13:33:11.924719 jq[1505]: true
Dec 13 13:33:11.943404 dbus-daemon[1483]: [system] SELinux support is enabled
Dec 13 13:33:11.943809 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:33:11.949923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:33:11.949975 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:33:11.951840 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:33:11.951881 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:33:11.954980 dbus-daemon[1483]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1441 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 13:33:11.956534 extend-filesystems[1487]: Found loop4
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found loop5
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found loop6
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found loop7
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda1
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda2
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda3
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found usr
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda4
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda6
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda7
Dec 13 13:33:11.965256 extend-filesystems[1487]: Found vda9
Dec 13 13:33:11.965256 extend-filesystems[1487]: Checking size of /dev/vda9
Dec 13 13:33:11.963792 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 13:33:12.007587 update_engine[1492]: I20241213 13:33:11.985415 1492 main.cc:92] Flatcar Update Engine starting
Dec 13 13:33:12.007587 update_engine[1492]: I20241213 13:33:11.997846 1492 update_check_scheduler.cc:74] Next update check in 6m31s
Dec 13 13:33:11.970139 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:33:11.978850 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 13:33:11.980134 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:33:11.980458 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:33:11.994240 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:33:12.006872 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:33:12.011272 extend-filesystems[1487]: Resized partition /dev/vda9
Dec 13 13:33:12.017694 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:33:12.035806 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 13:33:12.076017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:33:12.214282 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:33:12.218157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:33:12.243551 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 13:33:12.243679 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:33:12.244235 systemd[1]: Starting sshkeys.service...
Dec 13 13:33:12.247659 systemd-logind[1491]: New seat seat0.
Dec 13 13:33:12.250075 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:33:12.275365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1452) Dec 13 13:33:12.324448 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:33:12.336126 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:33:12.370018 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 13:33:12.370196 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 13:33:12.376673 dbus-daemon[1483]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1517 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 13:33:12.387037 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 13:33:12.410238 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 13:33:12.430452 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:33:12.434851 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:33:12.434851 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 13:33:12.434851 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 13:33:12.444069 extend-filesystems[1487]: Resized filesystem in /dev/vda9 Dec 13 13:33:12.440278 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:33:12.437559 polkitd[1550]: Started polkitd version 121 Dec 13 13:33:12.440588 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 13 13:33:12.455440 polkitd[1550]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 13:33:12.455548 polkitd[1550]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 13:33:12.457596 polkitd[1550]: Finished loading, compiling and executing 2 rules Dec 13 13:33:12.465774 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 13:33:12.466144 polkitd[1550]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 13:33:12.466377 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 13:33:12.491470 systemd-hostnamed[1517]: Hostname set to (static) Dec 13 13:33:12.529721 containerd[1509]: time="2024-12-13T13:33:12.529341297Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:33:12.583577 containerd[1509]: time="2024-12-13T13:33:12.583465500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586327956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586381402Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586407960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586721250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586749747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586848962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.586879075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.587104462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.587128280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.587147636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588334 containerd[1509]: time="2024-12-13T13:33:12.587172688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588897 containerd[1509]: time="2024-12-13T13:33:12.587292520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588897 containerd[1509]: time="2024-12-13T13:33:12.587723559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588897 containerd[1509]: time="2024-12-13T13:33:12.587871667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:33:12.588897 containerd[1509]: time="2024-12-13T13:33:12.587907358Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:33:12.588897 containerd[1509]: time="2024-12-13T13:33:12.588035610Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:33:12.588897 containerd[1509]: time="2024-12-13T13:33:12.588113373Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:33:12.608986 containerd[1509]: time="2024-12-13T13:33:12.608659134Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:33:12.608986 containerd[1509]: time="2024-12-13T13:33:12.608819118Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:33:12.608986 containerd[1509]: time="2024-12-13T13:33:12.608848961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:33:12.608986 containerd[1509]: time="2024-12-13T13:33:12.608912683Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:33:12.608986 containerd[1509]: time="2024-12-13T13:33:12.608937968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:33:12.609659 containerd[1509]: time="2024-12-13T13:33:12.609444107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 13:33:12.610075 containerd[1509]: time="2024-12-13T13:33:12.610024201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:33:12.610477 containerd[1509]: time="2024-12-13T13:33:12.610361806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:33:12.610477 containerd[1509]: time="2024-12-13T13:33:12.610420389Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:33:12.610477 containerd[1509]: time="2024-12-13T13:33:12.610447321Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:33:12.610831 containerd[1509]: time="2024-12-13T13:33:12.610659670Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.610831 containerd[1509]: time="2024-12-13T13:33:12.610707793Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.610831 containerd[1509]: time="2024-12-13T13:33:12.610748371Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.610831 containerd[1509]: time="2024-12-13T13:33:12.610782316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.611187 containerd[1509]: time="2024-12-13T13:33:12.610801254Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.611187 containerd[1509]: time="2024-12-13T13:33:12.611065938Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 13:33:12.611187 containerd[1509]: time="2024-12-13T13:33:12.611115809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.611187 containerd[1509]: time="2024-12-13T13:33:12.611139594Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:33:12.611426 containerd[1509]: time="2024-12-13T13:33:12.611167558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.611426 containerd[1509]: time="2024-12-13T13:33:12.611381985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.611725 containerd[1509]: time="2024-12-13T13:33:12.611405011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.611725 containerd[1509]: time="2024-12-13T13:33:12.611582687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.611725 containerd[1509]: time="2024-12-13T13:33:12.611667966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612102 containerd[1509]: time="2024-12-13T13:33:12.611709038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612102 containerd[1509]: time="2024-12-13T13:33:12.611977316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612102 containerd[1509]: time="2024-12-13T13:33:12.612000217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612102 containerd[1509]: time="2024-12-13T13:33:12.612042588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Dec 13 13:33:12.612102 containerd[1509]: time="2024-12-13T13:33:12.612065178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612470 containerd[1509]: time="2024-12-13T13:33:12.612082229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612470 containerd[1509]: time="2024-12-13T13:33:12.612347787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612470 containerd[1509]: time="2024-12-13T13:33:12.612370635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612470 containerd[1509]: time="2024-12-13T13:33:12.612412836Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:33:12.612470 containerd[1509]: time="2024-12-13T13:33:12.612444553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612909 containerd[1509]: time="2024-12-13T13:33:12.612702901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.612909 containerd[1509]: time="2024-12-13T13:33:12.612729349Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:33:12.613101 containerd[1509]: time="2024-12-13T13:33:12.612829159Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:33:12.613101 containerd[1509]: time="2024-12-13T13:33:12.613049940Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:33:12.613101 containerd[1509]: time="2024-12-13T13:33:12.613068926Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:33:12.613355 containerd[1509]: time="2024-12-13T13:33:12.613198735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:33:12.613355 containerd[1509]: time="2024-12-13T13:33:12.613221888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:33:12.613655 containerd[1509]: time="2024-12-13T13:33:12.613476707Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:33:12.613655 containerd[1509]: time="2024-12-13T13:33:12.613512365Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:33:12.613655 containerd[1509]: time="2024-12-13T13:33:12.613543832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 13:33:12.614404 containerd[1509]: time="2024-12-13T13:33:12.614173726Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:33:12.614404 containerd[1509]: time="2024-12-13T13:33:12.614268519Z" level=info msg="Connect containerd service" Dec 13 13:33:12.614404 containerd[1509]: time="2024-12-13T13:33:12.614339312Z" level=info msg="using legacy CRI server" Dec 13 13:33:12.614983 containerd[1509]: time="2024-12-13T13:33:12.614359606Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:33:12.615289 containerd[1509]: time="2024-12-13T13:33:12.615102034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:33:12.616605 containerd[1509]: time="2024-12-13T13:33:12.616343876Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:33:12.616605 containerd[1509]: time="2024-12-13T13:33:12.616499385Z" level=info msg="Start subscribing containerd event" Dec 13 13:33:12.616605 containerd[1509]: time="2024-12-13T13:33:12.616598609Z" level=info msg="Start recovering state" Dec 13 13:33:12.616790 containerd[1509]: time="2024-12-13T13:33:12.616747130Z" level=info msg="Start event monitor" Dec 13 13:33:12.616790 containerd[1509]: time="2024-12-13T13:33:12.616771654Z" level=info msg="Start 
snapshots syncer" Dec 13 13:33:12.616790 containerd[1509]: time="2024-12-13T13:33:12.616786054Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:33:12.616904 containerd[1509]: time="2024-12-13T13:33:12.616797792Z" level=info msg="Start streaming server" Dec 13 13:33:12.617669 containerd[1509]: time="2024-12-13T13:33:12.617323270Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:33:12.617669 containerd[1509]: time="2024-12-13T13:33:12.617429337Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:33:12.617669 containerd[1509]: time="2024-12-13T13:33:12.617543958Z" level=info msg="containerd successfully booted in 0.090251s" Dec 13 13:33:12.617670 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:33:12.751981 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:33:12.780495 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:33:12.790197 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:33:12.793148 systemd[1]: Started sshd@0-10.230.34.102:22-139.178.68.195:47470.service - OpenSSH per-connection server daemon (139.178.68.195:47470). Dec 13 13:33:12.802032 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:33:12.802470 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:33:12.809519 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:33:12.839781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:33:12.858294 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:33:12.860945 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:33:12.862069 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 13:33:13.093999 systemd-networkd[1441]: eth0: Gained IPv6LL Dec 13 13:33:13.097329 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:33:13.099956 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:33:13.107047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:13.118332 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:33:13.142388 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:33:13.717908 sshd[1576]: Accepted publickey for core from 139.178.68.195 port 47470 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:13.719822 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:13.734701 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:33:13.742198 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:33:13.750604 systemd-logind[1491]: New session 1 of user core. Dec 13 13:33:13.769578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:33:13.779555 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:33:13.789066 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:33:13.934397 systemd[1600]: Queued start job for default target default.target. Dec 13 13:33:13.947537 systemd[1600]: Created slice app.slice - User Application Slice. Dec 13 13:33:13.947815 systemd[1600]: Reached target paths.target - Paths. Dec 13 13:33:13.947840 systemd[1600]: Reached target timers.target - Timers. Dec 13 13:33:13.950143 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:33:13.985447 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Dec 13 13:33:13.985686 systemd[1600]: Reached target sockets.target - Sockets. Dec 13 13:33:13.985715 systemd[1600]: Reached target basic.target - Basic System. Dec 13 13:33:13.985782 systemd[1600]: Reached target default.target - Main User Target. Dec 13 13:33:13.985857 systemd[1600]: Startup finished in 186ms. Dec 13 13:33:13.986011 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:33:13.995189 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:33:14.031972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:14.038944 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:33:14.604094 systemd-networkd[1441]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8899:24:19ff:fee6:2266/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8899:24:19ff:fee6:2266/64 assigned by NDisc. Dec 13 13:33:14.604139 systemd-networkd[1441]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 13:33:14.636206 systemd[1]: Started sshd@1-10.230.34.102:22-139.178.68.195:47486.service - OpenSSH per-connection server daemon (139.178.68.195:47486). Dec 13 13:33:14.796384 kubelet[1614]: E1213 13:33:14.796237 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:33:14.799410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:33:14.799782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:33:14.800340 systemd[1]: kubelet.service: Consumed 1.087s CPU time. 
Dec 13 13:33:15.539412 sshd[1624]: Accepted publickey for core from 139.178.68.195 port 47486 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:15.541885 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:15.549848 systemd-logind[1491]: New session 2 of user core. Dec 13 13:33:15.566018 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:33:16.162586 sshd[1630]: Connection closed by 139.178.68.195 port 47486 Dec 13 13:33:16.163764 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:16.168934 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:33:16.170913 systemd[1]: sshd@1-10.230.34.102:22-139.178.68.195:47486.service: Deactivated successfully. Dec 13 13:33:16.173129 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:33:16.174772 systemd-logind[1491]: Removed session 2. Dec 13 13:33:16.328293 systemd[1]: Started sshd@2-10.230.34.102:22-139.178.68.195:60808.service - OpenSSH per-connection server daemon (139.178.68.195:60808). Dec 13 13:33:17.230008 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 60808 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:17.232451 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:17.241025 systemd-logind[1491]: New session 3 of user core. Dec 13 13:33:17.255079 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:33:17.854277 sshd[1637]: Connection closed by 139.178.68.195 port 60808 Dec 13 13:33:17.855195 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:17.860268 systemd[1]: sshd@2-10.230.34.102:22-139.178.68.195:60808.service: Deactivated successfully. Dec 13 13:33:17.863380 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:33:17.865877 systemd-logind[1491]: Session 3 logged out. 
Waiting for processes to exit. Dec 13 13:33:17.868437 systemd-logind[1491]: Removed session 3. Dec 13 13:33:17.905697 agetty[1583]: failed to open credentials directory Dec 13 13:33:17.906478 agetty[1584]: failed to open credentials directory Dec 13 13:33:17.921908 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:33:17.924137 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:33:17.931532 systemd-logind[1491]: New session 4 of user core. Dec 13 13:33:17.943952 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:33:17.948895 systemd-logind[1491]: New session 5 of user core. Dec 13 13:33:17.954928 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:33:18.923927 coreos-metadata[1482]: Dec 13 13:33:18.923 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:33:18.951270 coreos-metadata[1482]: Dec 13 13:33:18.951 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 13:33:18.957845 coreos-metadata[1482]: Dec 13 13:33:18.957 INFO Fetch failed with 404: resource not found Dec 13 13:33:18.957845 coreos-metadata[1482]: Dec 13 13:33:18.957 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 13:33:18.958512 coreos-metadata[1482]: Dec 13 13:33:18.958 INFO Fetch successful Dec 13 13:33:18.958512 coreos-metadata[1482]: Dec 13 13:33:18.958 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 13:33:18.970670 coreos-metadata[1482]: Dec 13 13:33:18.970 INFO Fetch successful Dec 13 13:33:18.970786 coreos-metadata[1482]: Dec 13 13:33:18.970 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 13:33:18.984463 coreos-metadata[1482]: Dec 13 13:33:18.984 INFO Fetch successful Dec 13 13:33:18.984700 coreos-metadata[1482]: Dec 13 13:33:18.984 INFO Fetching 
http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 13:33:18.999909 coreos-metadata[1482]: Dec 13 13:33:18.999 INFO Fetch successful Dec 13 13:33:19.000065 coreos-metadata[1482]: Dec 13 13:33:18.999 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 13:33:19.016911 coreos-metadata[1482]: Dec 13 13:33:19.016 INFO Fetch successful Dec 13 13:33:19.042354 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:33:19.044022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:33:19.475151 coreos-metadata[1545]: Dec 13 13:33:19.475 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:33:19.497545 coreos-metadata[1545]: Dec 13 13:33:19.497 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 13:33:19.526690 coreos-metadata[1545]: Dec 13 13:33:19.526 INFO Fetch successful Dec 13 13:33:19.526861 coreos-metadata[1545]: Dec 13 13:33:19.526 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:33:19.558018 coreos-metadata[1545]: Dec 13 13:33:19.557 INFO Fetch successful Dec 13 13:33:19.560081 unknown[1545]: wrote ssh authorized keys file for user: core Dec 13 13:33:19.578331 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:33:19.579834 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:33:19.581724 systemd[1]: Finished sshkeys.service. Dec 13 13:33:19.585697 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:33:19.589788 systemd[1]: Startup finished in 1.309s (kernel) + 14.294s (initrd) + 11.779s (userspace) = 27.384s. Dec 13 13:33:25.022492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 13 13:33:25.033921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:25.180342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:25.191092 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:33:25.313721 kubelet[1688]: E1213 13:33:25.313442 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:33:25.318755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:33:25.319032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:33:28.021199 systemd[1]: Started sshd@3-10.230.34.102:22-139.178.68.195:52112.service - OpenSSH per-connection server daemon (139.178.68.195:52112). Dec 13 13:33:28.916068 sshd[1697]: Accepted publickey for core from 139.178.68.195 port 52112 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:28.918254 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:28.925779 systemd-logind[1491]: New session 6 of user core. Dec 13 13:33:28.933883 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:33:29.536040 sshd[1699]: Connection closed by 139.178.68.195 port 52112 Dec 13 13:33:29.537056 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:29.541798 systemd[1]: sshd@3-10.230.34.102:22-139.178.68.195:52112.service: Deactivated successfully. Dec 13 13:33:29.544058 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:33:29.545116 systemd-logind[1491]: Session 6 logged out. 
Waiting for processes to exit. Dec 13 13:33:29.546549 systemd-logind[1491]: Removed session 6. Dec 13 13:33:29.697045 systemd[1]: Started sshd@4-10.230.34.102:22-139.178.68.195:52122.service - OpenSSH per-connection server daemon (139.178.68.195:52122). Dec 13 13:33:30.582773 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 52122 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:30.584775 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:30.592564 systemd-logind[1491]: New session 7 of user core. Dec 13 13:33:30.601873 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:33:31.193861 sshd[1706]: Connection closed by 139.178.68.195 port 52122 Dec 13 13:33:31.192931 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:31.197870 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:33:31.199301 systemd[1]: sshd@4-10.230.34.102:22-139.178.68.195:52122.service: Deactivated successfully. Dec 13 13:33:31.201870 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:33:31.203539 systemd-logind[1491]: Removed session 7. Dec 13 13:33:31.356328 systemd[1]: Started sshd@5-10.230.34.102:22-139.178.68.195:52124.service - OpenSSH per-connection server daemon (139.178.68.195:52124). Dec 13 13:33:32.238710 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 52124 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:32.240758 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:32.247495 systemd-logind[1491]: New session 8 of user core. Dec 13 13:33:32.255888 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 13:33:32.854722 sshd[1713]: Connection closed by 139.178.68.195 port 52124 Dec 13 13:33:32.855742 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:32.861222 systemd[1]: sshd@5-10.230.34.102:22-139.178.68.195:52124.service: Deactivated successfully. Dec 13 13:33:32.863932 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:33:32.864975 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:33:32.866594 systemd-logind[1491]: Removed session 8. Dec 13 13:33:33.025260 systemd[1]: Started sshd@6-10.230.34.102:22-139.178.68.195:52132.service - OpenSSH per-connection server daemon (139.178.68.195:52132). Dec 13 13:33:33.914972 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 52132 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:33.917363 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:33.924684 systemd-logind[1491]: New session 9 of user core. Dec 13 13:33:33.937014 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:33:34.407615 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:33:34.408198 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:33:34.422987 sudo[1721]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:34.566324 sshd[1720]: Connection closed by 139.178.68.195 port 52132 Dec 13 13:33:34.567460 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:34.572822 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:33:34.573623 systemd[1]: sshd@6-10.230.34.102:22-139.178.68.195:52132.service: Deactivated successfully. Dec 13 13:33:34.576060 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:33:34.577407 systemd-logind[1491]: Removed session 9. 
Dec 13 13:33:34.719747 systemd[1]: Started sshd@7-10.230.34.102:22-139.178.68.195:52140.service - OpenSSH per-connection server daemon (139.178.68.195:52140). Dec 13 13:33:35.472990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:33:35.488958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:35.626922 sshd[1726]: Accepted publickey for core from 139.178.68.195 port 52140 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:35.628473 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:35.645922 systemd-logind[1491]: New session 10 of user core. Dec 13 13:33:35.651922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:35.654868 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:33:35.658247 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:33:35.732371 kubelet[1735]: E1213 13:33:35.732134 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:33:35.735819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:33:35.736319 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 13:33:36.102746 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:33:36.103246 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:33:36.108553 sudo[1746]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:36.116835 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:33:36.117320 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:33:36.136148 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:33:36.179983 augenrules[1768]: No rules Dec 13 13:33:36.180906 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:33:36.181200 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:33:36.182525 sudo[1745]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:36.326138 sshd[1741]: Connection closed by 139.178.68.195 port 52140 Dec 13 13:33:36.327111 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:36.331397 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:33:36.331917 systemd[1]: sshd@7-10.230.34.102:22-139.178.68.195:52140.service: Deactivated successfully. Dec 13 13:33:36.333909 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:33:36.335848 systemd-logind[1491]: Removed session 10. Dec 13 13:33:36.490043 systemd[1]: Started sshd@8-10.230.34.102:22-139.178.68.195:50540.service - OpenSSH per-connection server daemon (139.178.68.195:50540). 
Dec 13 13:33:37.381100 sshd[1776]: Accepted publickey for core from 139.178.68.195 port 50540 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:33:37.383238 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:37.390975 systemd-logind[1491]: New session 11 of user core. Dec 13 13:33:37.398862 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:33:37.858546 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:33:37.859048 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:33:38.713386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:38.722942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:38.757588 systemd[1]: Reloading requested from client PID 1817 ('systemctl') (unit session-11.scope)... Dec 13 13:33:38.757632 systemd[1]: Reloading... Dec 13 13:33:38.914707 zram_generator::config[1859]: No configuration found. Dec 13 13:33:39.089705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:39.201006 systemd[1]: Reloading finished in 442 ms. Dec 13 13:33:39.269284 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:39.274361 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:33:39.274733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:39.280969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:39.422766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:33:39.435163 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:39.504686 kubelet[1925]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:39.504686 kubelet[1925]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:33:39.504686 kubelet[1925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:39.505330 kubelet[1925]: I1213 13:33:39.504778 1925 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:40.603055 kubelet[1925]: I1213 13:33:40.602986 1925 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:33:40.603055 kubelet[1925]: I1213 13:33:40.603044 1925 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:40.604711 kubelet[1925]: I1213 13:33:40.603997 1925 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:33:40.628070 kubelet[1925]: I1213 13:33:40.628029 1925 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:40.646422 kubelet[1925]: I1213 13:33:40.646395 1925 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:33:40.648416 kubelet[1925]: I1213 13:33:40.648391 1925 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:40.648850 kubelet[1925]: I1213 13:33:40.648804 1925 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:33:40.650216 kubelet[1925]: I1213 13:33:40.649767 1925 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:40.650216 kubelet[1925]: I1213 13:33:40.649798 1925 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:40.650216 kubelet[1925]: I1213 
13:33:40.650031 1925 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:40.650422 kubelet[1925]: I1213 13:33:40.650261 1925 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:33:40.650422 kubelet[1925]: I1213 13:33:40.650293 1925 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:33:40.650422 kubelet[1925]: I1213 13:33:40.650353 1925 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:40.650422 kubelet[1925]: I1213 13:33:40.650386 1925 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:40.653085 kubelet[1925]: I1213 13:33:40.652815 1925 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:40.653451 kubelet[1925]: E1213 13:33:40.653426 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:40.653634 kubelet[1925]: E1213 13:33:40.653611 1925 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:40.656366 kubelet[1925]: I1213 13:33:40.656318 1925 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:40.659676 kubelet[1925]: W1213 13:33:40.657802 1925 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 13:33:40.659676 kubelet[1925]: I1213 13:33:40.658798 1925 server.go:1256] "Started kubelet" Dec 13 13:33:40.659676 kubelet[1925]: I1213 13:33:40.659124 1925 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:40.660406 kubelet[1925]: I1213 13:33:40.660376 1925 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:33:40.660988 kubelet[1925]: I1213 13:33:40.660963 1925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:40.662507 kubelet[1925]: I1213 13:33:40.662026 1925 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:33:40.662507 kubelet[1925]: I1213 13:33:40.662289 1925 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:40.662956 kubelet[1925]: W1213 13:33:40.662809 1925 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:33:40.662956 kubelet[1925]: E1213 13:33:40.662867 1925 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:33:40.663085 kubelet[1925]: W1213 13:33:40.663032 1925 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.230.34.102" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:33:40.663085 kubelet[1925]: E1213 13:33:40.663064 1925 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.34.102" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 
Dec 13 13:33:40.674369 kubelet[1925]: E1213 13:33:40.674326 1925 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.34.102.1810bfdee97f5f01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.34.102,UID:10.230.34.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.34.102,},FirstTimestamp:2024-12-13 13:33:40.658724609 +0000 UTC m=+1.216931870,LastTimestamp:2024-12-13 13:33:40.658724609 +0000 UTC m=+1.216931870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.34.102,}" Dec 13 13:33:40.676022 kubelet[1925]: I1213 13:33:40.674765 1925 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:40.676022 kubelet[1925]: I1213 13:33:40.675288 1925 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:33:40.676022 kubelet[1925]: I1213 13:33:40.675414 1925 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:33:40.679507 kubelet[1925]: I1213 13:33:40.679378 1925 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:40.679905 kubelet[1925]: I1213 13:33:40.679682 1925 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:40.686071 kubelet[1925]: I1213 13:33:40.686035 1925 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:40.696417 kubelet[1925]: W1213 13:33:40.689104 1925 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:33:40.696417 kubelet[1925]: E1213 13:33:40.689148 1925 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:33:40.696417 kubelet[1925]: E1213 13:33:40.689291 1925 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.34.102\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 13:33:40.696417 kubelet[1925]: E1213 13:33:40.692567 1925 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:33:40.739598 kubelet[1925]: I1213 13:33:40.739565 1925 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:40.739785 kubelet[1925]: I1213 13:33:40.739766 1925 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:40.739927 kubelet[1925]: I1213 13:33:40.739908 1925 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:40.746674 kubelet[1925]: I1213 13:33:40.745343 1925 policy_none.go:49] "None policy: Start" Dec 13 13:33:40.746674 kubelet[1925]: I1213 13:33:40.746324 1925 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:40.746674 kubelet[1925]: I1213 13:33:40.746364 1925 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:40.760454 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:33:40.776396 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 13 13:33:40.778592 kubelet[1925]: I1213 13:33:40.778564 1925 kubelet_node_status.go:73] "Attempting to register node" node="10.230.34.102" Dec 13 13:33:40.786615 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:33:40.789939 kubelet[1925]: I1213 13:33:40.789868 1925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:33:40.792369 kubelet[1925]: I1213 13:33:40.792344 1925 kubelet_node_status.go:76] "Successfully registered node" node="10.230.34.102" Dec 13 13:33:40.793227 kubelet[1925]: I1213 13:33:40.792962 1925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:33:40.793312 kubelet[1925]: I1213 13:33:40.793263 1925 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:40.793374 kubelet[1925]: I1213 13:33:40.793321 1925 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:33:40.794717 kubelet[1925]: E1213 13:33:40.793516 1925 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:40.796848 kubelet[1925]: I1213 13:33:40.795366 1925 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:40.796848 kubelet[1925]: I1213 13:33:40.795796 1925 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:40.803607 kubelet[1925]: E1213 13:33:40.803559 1925 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.34.102\" not found" Dec 13 13:33:40.836225 kubelet[1925]: E1213 13:33:40.836199 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:40.937506 kubelet[1925]: E1213 13:33:40.937365 1925 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.039527 kubelet[1925]: E1213 13:33:41.039448 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.140382 kubelet[1925]: E1213 13:33:41.140354 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.241215 kubelet[1925]: E1213 13:33:41.241035 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.341951 kubelet[1925]: E1213 13:33:41.341849 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.442730 kubelet[1925]: E1213 13:33:41.442679 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.543779 kubelet[1925]: E1213 13:33:41.543718 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.623762 kubelet[1925]: I1213 13:33:41.623435 1925 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 13:33:41.623762 kubelet[1925]: W1213 13:33:41.623601 1925 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:33:41.623762 kubelet[1925]: W1213 13:33:41.623699 1925 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:33:41.623762 kubelet[1925]: W1213 13:33:41.623736 1925 
reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:33:41.644667 kubelet[1925]: E1213 13:33:41.644572 1925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.102\" not found" Dec 13 13:33:41.653748 kubelet[1925]: E1213 13:33:41.653702 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:41.745752 kubelet[1925]: I1213 13:33:41.745525 1925 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:33:41.746104 containerd[1509]: time="2024-12-13T13:33:41.745987315Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:33:41.746590 kubelet[1925]: I1213 13:33:41.746313 1925 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:33:41.950226 sudo[1779]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:42.095344 sshd[1778]: Connection closed by 139.178.68.195 port 50540 Dec 13 13:33:42.094976 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:42.100175 systemd[1]: sshd@8-10.230.34.102:22-139.178.68.195:50540.service: Deactivated successfully. Dec 13 13:33:42.103016 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:33:42.105047 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:33:42.106952 systemd-logind[1491]: Removed session 11. 
Dec 13 13:33:42.653256 kubelet[1925]: I1213 13:33:42.652873 1925 apiserver.go:52] "Watching apiserver" Dec 13 13:33:42.653981 kubelet[1925]: E1213 13:33:42.653786 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:42.658907 kubelet[1925]: I1213 13:33:42.658869 1925 topology_manager.go:215] "Topology Admit Handler" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" podNamespace="calico-system" podName="csi-node-driver-l9zzm" Dec 13 13:33:42.660231 kubelet[1925]: I1213 13:33:42.659085 1925 topology_manager.go:215] "Topology Admit Handler" podUID="3a84951d-b3ab-481d-a187-24d36f117169" podNamespace="kube-system" podName="kube-proxy-5q7dm" Dec 13 13:33:42.660231 kubelet[1925]: I1213 13:33:42.659202 1925 topology_manager.go:215] "Topology Admit Handler" podUID="ac819974-3bf2-4737-be4f-a8f989c8c16f" podNamespace="calico-system" podName="calico-node-smzkf" Dec 13 13:33:42.660231 kubelet[1925]: E1213 13:33:42.659362 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:42.672723 systemd[1]: Created slice kubepods-besteffort-pod3a84951d_b3ab_481d_a187_24d36f117169.slice - libcontainer container kubepods-besteffort-pod3a84951d_b3ab_481d_a187_24d36f117169.slice. Dec 13 13:33:42.675863 kubelet[1925]: I1213 13:33:42.675824 1925 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:33:42.687291 systemd[1]: Created slice kubepods-besteffort-podac819974_3bf2_4737_be4f_a8f989c8c16f.slice - libcontainer container kubepods-besteffort-podac819974_3bf2_4737_be4f_a8f989c8c16f.slice. 
Dec 13 13:33:42.688490 kubelet[1925]: I1213 13:33:42.688453 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a84951d-b3ab-481d-a187-24d36f117169-kube-proxy\") pod \"kube-proxy-5q7dm\" (UID: \"3a84951d-b3ab-481d-a187-24d36f117169\") " pod="kube-system/kube-proxy-5q7dm" Dec 13 13:33:42.689289 kubelet[1925]: I1213 13:33:42.689266 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a84951d-b3ab-481d-a187-24d36f117169-xtables-lock\") pod \"kube-proxy-5q7dm\" (UID: \"3a84951d-b3ab-481d-a187-24d36f117169\") " pod="kube-system/kube-proxy-5q7dm" Dec 13 13:33:42.689448 kubelet[1925]: I1213 13:33:42.689428 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-xtables-lock\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.689612 kubelet[1925]: I1213 13:33:42.689591 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-cni-net-dir\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.689916 kubelet[1925]: I1213 13:33:42.689887 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gmw\" (UniqueName: \"kubernetes.io/projected/ac819974-3bf2-4737-be4f-a8f989c8c16f-kube-api-access-w5gmw\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.690808 kubelet[1925]: I1213 13:33:42.690714 1925 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/20230d91-0ce8-452a-b397-e3c0f73a38ab-varrun\") pod \"csi-node-driver-l9zzm\" (UID: \"20230d91-0ce8-452a-b397-e3c0f73a38ab\") " pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:42.690808 kubelet[1925]: I1213 13:33:42.690793 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20230d91-0ce8-452a-b397-e3c0f73a38ab-registration-dir\") pod \"csi-node-driver-l9zzm\" (UID: \"20230d91-0ce8-452a-b397-e3c0f73a38ab\") " pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:42.690947 kubelet[1925]: I1213 13:33:42.690858 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a84951d-b3ab-481d-a187-24d36f117169-lib-modules\") pod \"kube-proxy-5q7dm\" (UID: \"3a84951d-b3ab-481d-a187-24d36f117169\") " pod="kube-system/kube-proxy-5q7dm" Dec 13 13:33:42.690947 kubelet[1925]: I1213 13:33:42.690904 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-var-run-calico\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.691050 kubelet[1925]: I1213 13:33:42.690958 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-var-lib-calico\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.691050 kubelet[1925]: I1213 13:33:42.690993 1925 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-flexvol-driver-host\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.691165 kubelet[1925]: I1213 13:33:42.691152 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20230d91-0ce8-452a-b397-e3c0f73a38ab-kubelet-dir\") pod \"csi-node-driver-l9zzm\" (UID: \"20230d91-0ce8-452a-b397-e3c0f73a38ab\") " pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:42.691222 kubelet[1925]: I1213 13:33:42.691194 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20230d91-0ce8-452a-b397-e3c0f73a38ab-socket-dir\") pod \"csi-node-driver-l9zzm\" (UID: \"20230d91-0ce8-452a-b397-e3c0f73a38ab\") " pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:42.691278 kubelet[1925]: I1213 13:33:42.691247 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4vc\" (UniqueName: \"kubernetes.io/projected/20230d91-0ce8-452a-b397-e3c0f73a38ab-kube-api-access-qg4vc\") pod \"csi-node-driver-l9zzm\" (UID: \"20230d91-0ce8-452a-b397-e3c0f73a38ab\") " pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:42.691343 kubelet[1925]: I1213 13:33:42.691323 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-policysync\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.692063 kubelet[1925]: I1213 13:33:42.691358 1925 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac819974-3bf2-4737-be4f-a8f989c8c16f-tigera-ca-bundle\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.692063 kubelet[1925]: I1213 13:33:42.691447 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ac819974-3bf2-4737-be4f-a8f989c8c16f-node-certs\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.692063 kubelet[1925]: I1213 13:33:42.691610 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4bmf\" (UniqueName: \"kubernetes.io/projected/3a84951d-b3ab-481d-a187-24d36f117169-kube-api-access-h4bmf\") pod \"kube-proxy-5q7dm\" (UID: \"3a84951d-b3ab-481d-a187-24d36f117169\") " pod="kube-system/kube-proxy-5q7dm" Dec 13 13:33:42.692063 kubelet[1925]: I1213 13:33:42.691707 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-lib-modules\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.692063 kubelet[1925]: I1213 13:33:42.691764 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-cni-bin-dir\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.692329 kubelet[1925]: I1213 13:33:42.691846 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ac819974-3bf2-4737-be4f-a8f989c8c16f-cni-log-dir\") pod \"calico-node-smzkf\" (UID: \"ac819974-3bf2-4737-be4f-a8f989c8c16f\") " pod="calico-system/calico-node-smzkf" Dec 13 13:33:42.797345 kubelet[1925]: E1213 13:33:42.797208 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:42.797345 kubelet[1925]: W1213 13:33:42.797273 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:42.797345 kubelet[1925]: E1213 13:33:42.797310 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:42.798496 kubelet[1925]: E1213 13:33:42.798364 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:42.798496 kubelet[1925]: W1213 13:33:42.798394 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:42.798496 kubelet[1925]: E1213 13:33:42.798432 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:42.815282 kubelet[1925]: E1213 13:33:42.814013 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:42.815282 kubelet[1925]: W1213 13:33:42.814034 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:42.815282 kubelet[1925]: E1213 13:33:42.814058 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:42.824758 kubelet[1925]: E1213 13:33:42.824725 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:42.824758 kubelet[1925]: W1213 13:33:42.824745 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:42.825707 kubelet[1925]: E1213 13:33:42.824773 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:42.825707 kubelet[1925]: E1213 13:33:42.825049 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:42.825707 kubelet[1925]: W1213 13:33:42.825062 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:42.825707 kubelet[1925]: E1213 13:33:42.825087 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:33:42.826967 kubelet[1925]: E1213 13:33:42.826887 1925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:33:42.826967 kubelet[1925]: W1213 13:33:42.826909 1925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:33:42.826967 kubelet[1925]: E1213 13:33:42.826927 1925 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:33:42.983775 containerd[1509]: time="2024-12-13T13:33:42.983521774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5q7dm,Uid:3a84951d-b3ab-481d-a187-24d36f117169,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:42.993636 containerd[1509]: time="2024-12-13T13:33:42.993172987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smzkf,Uid:ac819974-3bf2-4737-be4f-a8f989c8c16f,Namespace:calico-system,Attempt:0,}" Dec 13 13:33:43.654192 kubelet[1925]: E1213 13:33:43.654138 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:43.835581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712414216.mount: Deactivated successfully. Dec 13 13:33:43.840969 containerd[1509]: time="2024-12-13T13:33:43.840893035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:43.842671 containerd[1509]: time="2024-12-13T13:33:43.842577694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 13:33:43.845237 containerd[1509]: time="2024-12-13T13:33:43.845184604Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:43.846805 containerd[1509]: time="2024-12-13T13:33:43.846765377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:43.847328 containerd[1509]: time="2024-12-13T13:33:43.847238426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=0" Dec 13 13:33:43.851712 containerd[1509]: time="2024-12-13T13:33:43.851635515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:43.853940 containerd[1509]: time="2024-12-13T13:33:43.852889952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 859.615324ms" Dec 13 13:33:43.854435 containerd[1509]: time="2024-12-13T13:33:43.854379760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 870.564181ms" Dec 13 13:33:44.010101 containerd[1509]: time="2024-12-13T13:33:44.009271333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:44.010101 containerd[1509]: time="2024-12-13T13:33:44.009472805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:44.010101 containerd[1509]: time="2024-12-13T13:33:44.009551808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:44.011114 containerd[1509]: time="2024-12-13T13:33:44.010018529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:44.011114 containerd[1509]: time="2024-12-13T13:33:44.006126119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:44.011114 containerd[1509]: time="2024-12-13T13:33:44.010858323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:44.011255 containerd[1509]: time="2024-12-13T13:33:44.010982854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:44.011981 containerd[1509]: time="2024-12-13T13:33:44.011874300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:44.135864 systemd[1]: Started cri-containerd-6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b.scope - libcontainer container 6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b. Dec 13 13:33:44.139767 systemd[1]: Started cri-containerd-df7bb87a1597144ee4e12e209b15133f39b10bbda96d3c7e639f54e4c05cea48.scope - libcontainer container df7bb87a1597144ee4e12e209b15133f39b10bbda96d3c7e639f54e4c05cea48. 
Dec 13 13:33:44.188713 containerd[1509]: time="2024-12-13T13:33:44.188573488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smzkf,Uid:ac819974-3bf2-4737-be4f-a8f989c8c16f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\"" Dec 13 13:33:44.195502 containerd[1509]: time="2024-12-13T13:33:44.195221874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:33:44.196033 containerd[1509]: time="2024-12-13T13:33:44.195940086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5q7dm,Uid:3a84951d-b3ab-481d-a187-24d36f117169,Namespace:kube-system,Attempt:0,} returns sandbox id \"df7bb87a1597144ee4e12e209b15133f39b10bbda96d3c7e639f54e4c05cea48\"" Dec 13 13:33:44.649834 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 13:33:44.655252 kubelet[1925]: E1213 13:33:44.655193 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:44.795063 kubelet[1925]: E1213 13:33:44.794455 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:45.487313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927694241.mount: Deactivated successfully. 
Dec 13 13:33:45.621628 containerd[1509]: time="2024-12-13T13:33:45.621515204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:45.623091 containerd[1509]: time="2024-12-13T13:33:45.623023836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 13:33:45.623944 containerd[1509]: time="2024-12-13T13:33:45.623873245Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:45.627519 containerd[1509]: time="2024-12-13T13:33:45.627451056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:45.629324 containerd[1509]: time="2024-12-13T13:33:45.628517771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.433251918s" Dec 13 13:33:45.629324 containerd[1509]: time="2024-12-13T13:33:45.628586333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:33:45.629960 containerd[1509]: time="2024-12-13T13:33:45.629894610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:33:45.632581 containerd[1509]: time="2024-12-13T13:33:45.632539052Z" level=info msg="CreateContainer within sandbox 
\"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:33:45.655873 containerd[1509]: time="2024-12-13T13:33:45.655834569Z" level=info msg="CreateContainer within sandbox \"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6\"" Dec 13 13:33:45.656221 kubelet[1925]: E1213 13:33:45.656068 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:45.657038 containerd[1509]: time="2024-12-13T13:33:45.656979261Z" level=info msg="StartContainer for \"7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6\"" Dec 13 13:33:45.697260 systemd[1]: Started cri-containerd-7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6.scope - libcontainer container 7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6. Dec 13 13:33:45.742237 containerd[1509]: time="2024-12-13T13:33:45.742091945Z" level=info msg="StartContainer for \"7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6\" returns successfully" Dec 13 13:33:45.761494 systemd[1]: cri-containerd-7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6.scope: Deactivated successfully. 
Dec 13 13:33:45.843749 containerd[1509]: time="2024-12-13T13:33:45.843634454Z" level=info msg="shim disconnected" id=7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6 namespace=k8s.io Dec 13 13:33:45.843749 containerd[1509]: time="2024-12-13T13:33:45.843739226Z" level=warning msg="cleaning up after shim disconnected" id=7e0fb085777f8cda3d4b6bdbf48cfac1ac9afdbb256d36f22ae3564f26b80ba6 namespace=k8s.io Dec 13 13:33:45.843749 containerd[1509]: time="2024-12-13T13:33:45.843757729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:45.861639 containerd[1509]: time="2024-12-13T13:33:45.861566221Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:33:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:33:46.656614 kubelet[1925]: E1213 13:33:46.656551 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:46.796913 kubelet[1925]: E1213 13:33:46.796520 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:47.005229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482736839.mount: Deactivated successfully. 
Dec 13 13:33:47.615421 containerd[1509]: time="2024-12-13T13:33:47.615175818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:47.616522 containerd[1509]: time="2024-12-13T13:33:47.616310736Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 13:33:47.617421 containerd[1509]: time="2024-12-13T13:33:47.617376292Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:47.620506 containerd[1509]: time="2024-12-13T13:33:47.620440586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:47.621760 containerd[1509]: time="2024-12-13T13:33:47.621603915Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.991666388s" Dec 13 13:33:47.621760 containerd[1509]: time="2024-12-13T13:33:47.621641760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 13:33:47.622720 containerd[1509]: time="2024-12-13T13:33:47.622288657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:33:47.625026 containerd[1509]: time="2024-12-13T13:33:47.624828940Z" level=info msg="CreateContainer within sandbox \"df7bb87a1597144ee4e12e209b15133f39b10bbda96d3c7e639f54e4c05cea48\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:33:47.641570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277835675.mount: Deactivated successfully. Dec 13 13:33:47.649080 containerd[1509]: time="2024-12-13T13:33:47.649043400Z" level=info msg="CreateContainer within sandbox \"df7bb87a1597144ee4e12e209b15133f39b10bbda96d3c7e639f54e4c05cea48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea8c62c68a1cf92c4f2ab6bbcac9c57ff7141395a85dbe4b8af7a1f5dfce5e94\"" Dec 13 13:33:47.649750 containerd[1509]: time="2024-12-13T13:33:47.649704962Z" level=info msg="StartContainer for \"ea8c62c68a1cf92c4f2ab6bbcac9c57ff7141395a85dbe4b8af7a1f5dfce5e94\"" Dec 13 13:33:47.657832 kubelet[1925]: E1213 13:33:47.657677 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:47.698943 systemd[1]: Started cri-containerd-ea8c62c68a1cf92c4f2ab6bbcac9c57ff7141395a85dbe4b8af7a1f5dfce5e94.scope - libcontainer container ea8c62c68a1cf92c4f2ab6bbcac9c57ff7141395a85dbe4b8af7a1f5dfce5e94. 
Dec 13 13:33:47.743580 containerd[1509]: time="2024-12-13T13:33:47.743532531Z" level=info msg="StartContainer for \"ea8c62c68a1cf92c4f2ab6bbcac9c57ff7141395a85dbe4b8af7a1f5dfce5e94\" returns successfully" Dec 13 13:33:47.850325 kubelet[1925]: I1213 13:33:47.850198 1925 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5q7dm" podStartSLOduration=4.425483914 podStartE2EDuration="7.850054454s" podCreationTimestamp="2024-12-13 13:33:40 +0000 UTC" firstStartedPulling="2024-12-13 13:33:44.197436674 +0000 UTC m=+4.755643937" lastFinishedPulling="2024-12-13 13:33:47.622007205 +0000 UTC m=+8.180214477" observedRunningTime="2024-12-13 13:33:47.849932321 +0000 UTC m=+8.408139619" watchObservedRunningTime="2024-12-13 13:33:47.850054454 +0000 UTC m=+8.408261725" Dec 13 13:33:48.658476 kubelet[1925]: E1213 13:33:48.658337 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:48.795505 kubelet[1925]: E1213 13:33:48.795041 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:49.659497 kubelet[1925]: E1213 13:33:49.659359 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:50.661301 kubelet[1925]: E1213 13:33:50.661128 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:50.795458 kubelet[1925]: E1213 13:33:50.795413 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:51.662399 kubelet[1925]: E1213 13:33:51.662343 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:52.664219 kubelet[1925]: E1213 13:33:52.664106 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:52.794477 kubelet[1925]: E1213 13:33:52.793848 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:52.958623 containerd[1509]: time="2024-12-13T13:33:52.958307607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:52.960377 containerd[1509]: time="2024-12-13T13:33:52.960090446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:33:52.961330 containerd[1509]: time="2024-12-13T13:33:52.961254429Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:52.967114 containerd[1509]: time="2024-12-13T13:33:52.965649725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:52.967114 containerd[1509]: time="2024-12-13T13:33:52.966931492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.344592801s" Dec 13 13:33:52.967114 containerd[1509]: time="2024-12-13T13:33:52.966977288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:33:52.970032 containerd[1509]: time="2024-12-13T13:33:52.969978202Z" level=info msg="CreateContainer within sandbox \"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:33:52.991442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708904352.mount: Deactivated successfully. Dec 13 13:33:52.992754 containerd[1509]: time="2024-12-13T13:33:52.992422572Z" level=info msg="CreateContainer within sandbox \"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6\"" Dec 13 13:33:52.993563 containerd[1509]: time="2024-12-13T13:33:52.993497566Z" level=info msg="StartContainer for \"86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6\"" Dec 13 13:33:53.045921 systemd[1]: Started cri-containerd-86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6.scope - libcontainer container 86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6. 
Dec 13 13:33:53.089591 containerd[1509]: time="2024-12-13T13:33:53.089451951Z" level=info msg="StartContainer for \"86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6\" returns successfully" Dec 13 13:33:53.664880 kubelet[1925]: E1213 13:33:53.664749 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:53.969708 containerd[1509]: time="2024-12-13T13:33:53.969424146Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:33:53.972974 systemd[1]: cri-containerd-86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6.scope: Deactivated successfully. Dec 13 13:33:54.001198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6-rootfs.mount: Deactivated successfully. 
Dec 13 13:33:54.011771 kubelet[1925]: I1213 13:33:54.011727 1925 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:33:54.152105 containerd[1509]: time="2024-12-13T13:33:54.151782997Z" level=info msg="shim disconnected" id=86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6 namespace=k8s.io Dec 13 13:33:54.152105 containerd[1509]: time="2024-12-13T13:33:54.152032372Z" level=warning msg="cleaning up after shim disconnected" id=86e40b48aa2a579b4c1809112e918ee36acbaad1cca9baa30857d066fa0e85b6 namespace=k8s.io Dec 13 13:33:54.152105 containerd[1509]: time="2024-12-13T13:33:54.152056671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:54.665533 kubelet[1925]: E1213 13:33:54.665455 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:54.803099 systemd[1]: Created slice kubepods-besteffort-pod20230d91_0ce8_452a_b397_e3c0f73a38ab.slice - libcontainer container kubepods-besteffort-pod20230d91_0ce8_452a_b397_e3c0f73a38ab.slice. 
Dec 13 13:33:54.807475 containerd[1509]: time="2024-12-13T13:33:54.806535169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:0,}" Dec 13 13:33:54.857764 containerd[1509]: time="2024-12-13T13:33:54.857544121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:33:54.897962 containerd[1509]: time="2024-12-13T13:33:54.897891806Z" level=error msg="Failed to destroy network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:54.900277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c-shm.mount: Deactivated successfully. Dec 13 13:33:54.900583 containerd[1509]: time="2024-12-13T13:33:54.900531276Z" level=error msg="encountered an error cleaning up failed sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:54.900683 containerd[1509]: time="2024-12-13T13:33:54.900628627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:54.901010 kubelet[1925]: E1213 
13:33:54.900972 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:54.901094 kubelet[1925]: E1213 13:33:54.901067 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:54.901147 kubelet[1925]: E1213 13:33:54.901107 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:54.901231 kubelet[1925]: E1213 13:33:54.901185 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:55.666088 kubelet[1925]: E1213 13:33:55.666023 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:55.857886 kubelet[1925]: I1213 13:33:55.857838 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c" Dec 13 13:33:55.858835 containerd[1509]: time="2024-12-13T13:33:55.858796693Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:33:55.860131 containerd[1509]: time="2024-12-13T13:33:55.859863513Z" level=info msg="Ensure that sandbox 5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c in task-service has been cleanup successfully" Dec 13 13:33:55.860523 containerd[1509]: time="2024-12-13T13:33:55.860403553Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:33:55.860523 containerd[1509]: time="2024-12-13T13:33:55.860430703Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:33:55.862862 containerd[1509]: time="2024-12-13T13:33:55.862743814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:1,}" Dec 13 13:33:55.863450 systemd[1]: run-netns-cni\x2df43d7d16\x2da083\x2dd920\x2d69cd\x2d7c63e67b0707.mount: Deactivated successfully. 
Dec 13 13:33:55.972090 containerd[1509]: time="2024-12-13T13:33:55.971628392Z" level=error msg="Failed to destroy network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:55.975605 containerd[1509]: time="2024-12-13T13:33:55.975563042Z" level=error msg="encountered an error cleaning up failed sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:55.978806 containerd[1509]: time="2024-12-13T13:33:55.975826837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:55.978897 kubelet[1925]: E1213 13:33:55.977689 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:55.978897 kubelet[1925]: E1213 13:33:55.977904 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:55.978897 kubelet[1925]: E1213 13:33:55.977962 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:55.976027 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d-shm.mount: Deactivated successfully. 
Dec 13 13:33:55.979254 kubelet[1925]: E1213 13:33:55.978172 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:56.666546 kubelet[1925]: E1213 13:33:56.666496 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:56.862106 kubelet[1925]: I1213 13:33:56.862016 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d" Dec 13 13:33:56.864062 containerd[1509]: time="2024-12-13T13:33:56.863571911Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:33:56.864062 containerd[1509]: time="2024-12-13T13:33:56.863892120Z" level=info msg="Ensure that sandbox a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d in task-service has been cleanup successfully" Dec 13 13:33:56.864827 containerd[1509]: time="2024-12-13T13:33:56.864704538Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:33:56.864827 containerd[1509]: time="2024-12-13T13:33:56.864732169Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns 
successfully" Dec 13 13:33:56.868161 containerd[1509]: time="2024-12-13T13:33:56.868130686Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:33:56.868511 containerd[1509]: time="2024-12-13T13:33:56.868384269Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:33:56.868511 containerd[1509]: time="2024-12-13T13:33:56.868409378Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:33:56.869503 containerd[1509]: time="2024-12-13T13:33:56.869075459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:2,}" Dec 13 13:33:56.869157 systemd[1]: run-netns-cni\x2d38ed6616\x2d4a7e\x2dc4d8\x2de2e1\x2da27f6745c126.mount: Deactivated successfully. Dec 13 13:33:56.971541 containerd[1509]: time="2024-12-13T13:33:56.971335214Z" level=error msg="Failed to destroy network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:56.974313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7-shm.mount: Deactivated successfully. 
Dec 13 13:33:56.975530 containerd[1509]: time="2024-12-13T13:33:56.974355086Z" level=error msg="encountered an error cleaning up failed sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:56.975530 containerd[1509]: time="2024-12-13T13:33:56.974448793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:56.975680 kubelet[1925]: E1213 13:33:56.974858 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:56.975680 kubelet[1925]: E1213 13:33:56.974951 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:56.975680 kubelet[1925]: E1213 13:33:56.974989 1925 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:56.975869 kubelet[1925]: E1213 13:33:56.975072 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:57.209715 update_engine[1492]: I20241213 13:33:57.209295 1492 update_attempter.cc:509] Updating boot flags... 
Dec 13 13:33:57.289824 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2466) Dec 13 13:33:57.414094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2465) Dec 13 13:33:57.669460 kubelet[1925]: E1213 13:33:57.668482 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:57.867711 kubelet[1925]: I1213 13:33:57.867565 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7" Dec 13 13:33:57.869286 containerd[1509]: time="2024-12-13T13:33:57.868521338Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:33:57.869286 containerd[1509]: time="2024-12-13T13:33:57.869047183Z" level=info msg="Ensure that sandbox 806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7 in task-service has been cleanup successfully" Dec 13 13:33:57.871769 containerd[1509]: time="2024-12-13T13:33:57.871738185Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:33:57.871907 containerd[1509]: time="2024-12-13T13:33:57.871880883Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:33:57.872966 containerd[1509]: time="2024-12-13T13:33:57.872639949Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:33:57.872966 containerd[1509]: time="2024-12-13T13:33:57.872768423Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:33:57.872966 containerd[1509]: time="2024-12-13T13:33:57.872788914Z" level=info msg="StopPodSandbox for 
\"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:33:57.873347 containerd[1509]: time="2024-12-13T13:33:57.873305356Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:33:57.873595 containerd[1509]: time="2024-12-13T13:33:57.873513361Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:33:57.874046 containerd[1509]: time="2024-12-13T13:33:57.873706942Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:33:57.873784 systemd[1]: run-netns-cni\x2dbd8b87ae\x2d779d\x2d75ea\x2d32a7\x2dd8acbed20034.mount: Deactivated successfully. Dec 13 13:33:57.876466 containerd[1509]: time="2024-12-13T13:33:57.875212380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:3,}" Dec 13 13:33:57.974689 containerd[1509]: time="2024-12-13T13:33:57.973924303Z" level=error msg="Failed to destroy network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:57.977721 containerd[1509]: time="2024-12-13T13:33:57.977161834Z" level=error msg="encountered an error cleaning up failed sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:57.977721 containerd[1509]: time="2024-12-13T13:33:57.977248798Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:57.977884 kubelet[1925]: E1213 13:33:57.977560 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:57.977884 kubelet[1925]: E1213 13:33:57.977628 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:57.977884 kubelet[1925]: E1213 13:33:57.977778 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:57.978081 kubelet[1925]: E1213 13:33:57.977863 1925 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:57.980272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8-shm.mount: Deactivated successfully. Dec 13 13:33:58.669797 kubelet[1925]: E1213 13:33:58.669283 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:58.875073 kubelet[1925]: I1213 13:33:58.874599 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8" Dec 13 13:33:58.876265 containerd[1509]: time="2024-12-13T13:33:58.875349966Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:33:58.876265 containerd[1509]: time="2024-12-13T13:33:58.875683640Z" level=info msg="Ensure that sandbox 2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8 in task-service has been cleanup successfully" Dec 13 13:33:58.879201 containerd[1509]: time="2024-12-13T13:33:58.877799806Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:33:58.879201 containerd[1509]: time="2024-12-13T13:33:58.877833421Z" level=info msg="StopPodSandbox 
for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:33:58.880202 systemd[1]: run-netns-cni\x2de7f0a18f\x2d8a94\x2d6ac1\x2d8a78\x2d8736535065af.mount: Deactivated successfully. Dec 13 13:33:58.881297 containerd[1509]: time="2024-12-13T13:33:58.880839485Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:33:58.881297 containerd[1509]: time="2024-12-13T13:33:58.880963913Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:33:58.881297 containerd[1509]: time="2024-12-13T13:33:58.880982071Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:33:58.883187 containerd[1509]: time="2024-12-13T13:33:58.881952570Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:33:58.883438 containerd[1509]: time="2024-12-13T13:33:58.883409733Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:33:58.884383 containerd[1509]: time="2024-12-13T13:33:58.884324453Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:33:58.884932 containerd[1509]: time="2024-12-13T13:33:58.884881881Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:33:58.885941 containerd[1509]: time="2024-12-13T13:33:58.885808040Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:33:58.885941 containerd[1509]: time="2024-12-13T13:33:58.885833845Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" 
returns successfully" Dec 13 13:33:58.886668 containerd[1509]: time="2024-12-13T13:33:58.886367525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:4,}" Dec 13 13:33:59.031270 containerd[1509]: time="2024-12-13T13:33:59.031216178Z" level=error msg="Failed to destroy network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:59.033245 containerd[1509]: time="2024-12-13T13:33:59.033091130Z" level=error msg="encountered an error cleaning up failed sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:59.033336 containerd[1509]: time="2024-12-13T13:33:59.033256615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:59.035169 kubelet[1925]: E1213 13:33:59.033602 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:33:59.035169 kubelet[1925]: E1213 13:33:59.033731 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:59.035169 kubelet[1925]: E1213 13:33:59.033768 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:33:59.035401 kubelet[1925]: E1213 13:33:59.033844 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:33:59.035406 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0-shm.mount: Deactivated successfully. Dec 13 13:33:59.670734 kubelet[1925]: E1213 13:33:59.670624 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:33:59.889045 kubelet[1925]: I1213 13:33:59.888516 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0" Dec 13 13:33:59.892938 containerd[1509]: time="2024-12-13T13:33:59.892853519Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:33:59.894269 containerd[1509]: time="2024-12-13T13:33:59.894038183Z" level=info msg="Ensure that sandbox a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0 in task-service has been cleanup successfully" Dec 13 13:33:59.898765 containerd[1509]: time="2024-12-13T13:33:59.898733572Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:33:59.898918 containerd[1509]: time="2024-12-13T13:33:59.898891134Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:33:59.899941 systemd[1]: run-netns-cni\x2d23ea2bdb\x2d4b24\x2d1dee\x2d1263\x2d94a55227bf98.mount: Deactivated successfully. 
Dec 13 13:33:59.901567 containerd[1509]: time="2024-12-13T13:33:59.900995694Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:33:59.901567 containerd[1509]: time="2024-12-13T13:33:59.901491091Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:33:59.901567 containerd[1509]: time="2024-12-13T13:33:59.901512694Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:33:59.904558 containerd[1509]: time="2024-12-13T13:33:59.903221170Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:33:59.904558 containerd[1509]: time="2024-12-13T13:33:59.903384262Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:33:59.904558 containerd[1509]: time="2024-12-13T13:33:59.903404463Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:33:59.904926 containerd[1509]: time="2024-12-13T13:33:59.904896423Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:33:59.906790 containerd[1509]: time="2024-12-13T13:33:59.906764489Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:33:59.906932 containerd[1509]: time="2024-12-13T13:33:59.906908731Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:33:59.907397 containerd[1509]: time="2024-12-13T13:33:59.907368159Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:33:59.907709 
containerd[1509]: time="2024-12-13T13:33:59.907561799Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:33:59.907840 containerd[1509]: time="2024-12-13T13:33:59.907806331Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:33:59.909202 containerd[1509]: time="2024-12-13T13:33:59.909171829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:5,}" Dec 13 13:34:00.052794 containerd[1509]: time="2024-12-13T13:34:00.052731441Z" level=error msg="Failed to destroy network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.053489 containerd[1509]: time="2024-12-13T13:34:00.053452625Z" level=error msg="encountered an error cleaning up failed sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.053677 containerd[1509]: time="2024-12-13T13:34:00.053620218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 13:34:00.055677 kubelet[1925]: E1213 13:34:00.054100 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.055677 kubelet[1925]: E1213 13:34:00.054194 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:00.055677 kubelet[1925]: E1213 13:34:00.054241 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:00.055936 kubelet[1925]: E1213 13:34:00.054371 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:34:00.056561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe-shm.mount: Deactivated successfully. Dec 13 13:34:00.072534 kubelet[1925]: I1213 13:34:00.071690 1925 topology_manager.go:215] "Topology Admit Handler" podUID="9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e" podNamespace="default" podName="nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:00.085177 systemd[1]: Created slice kubepods-besteffort-pod9c7bd7a9_f050_43e9_9f01_dbc63acc5f4e.slice - libcontainer container kubepods-besteffort-pod9c7bd7a9_f050_43e9_9f01_dbc63acc5f4e.slice. Dec 13 13:34:00.198491 kubelet[1925]: I1213 13:34:00.198427 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8j6b\" (UniqueName: \"kubernetes.io/projected/9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e-kube-api-access-m8j6b\") pod \"nginx-deployment-6d5f899847-bqgq7\" (UID: \"9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e\") " pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:00.391271 containerd[1509]: time="2024-12-13T13:34:00.390720515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:0,}" Dec 13 13:34:00.613360 containerd[1509]: time="2024-12-13T13:34:00.613282405Z" level=error msg="Failed to destroy network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.614254 containerd[1509]: 
time="2024-12-13T13:34:00.614217299Z" level=error msg="encountered an error cleaning up failed sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.614861 containerd[1509]: time="2024-12-13T13:34:00.614811878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.616003 kubelet[1925]: E1213 13:34:00.615530 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:00.616003 kubelet[1925]: E1213 13:34:00.615614 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:00.616003 kubelet[1925]: E1213 13:34:00.615681 1925 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:00.616253 kubelet[1925]: E1213 13:34:00.615765 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-bqgq7" podUID="9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e" Dec 13 13:34:00.651311 kubelet[1925]: E1213 13:34:00.651177 1925 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:00.671450 kubelet[1925]: E1213 13:34:00.671409 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:00.895240 kubelet[1925]: I1213 13:34:00.893981 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe" Dec 13 13:34:00.898764 containerd[1509]: time="2024-12-13T13:34:00.896117777Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:00.900230 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96-shm.mount: Deactivated successfully. Dec 13 13:34:00.900831 containerd[1509]: time="2024-12-13T13:34:00.900799809Z" level=info msg="Ensure that sandbox 5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe in task-service has been cleanup successfully" Dec 13 13:34:00.903043 containerd[1509]: time="2024-12-13T13:34:00.902110162Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:00.903043 containerd[1509]: time="2024-12-13T13:34:00.902139103Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:00.905152 containerd[1509]: time="2024-12-13T13:34:00.904004301Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:00.905152 containerd[1509]: time="2024-12-13T13:34:00.904105117Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:00.905152 containerd[1509]: time="2024-12-13T13:34:00.904123344Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:00.905441 kubelet[1925]: I1213 13:34:00.904933 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96" Dec 13 13:34:00.907990 containerd[1509]: time="2024-12-13T13:34:00.905998315Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\"" Dec 13 13:34:00.907990 containerd[1509]: time="2024-12-13T13:34:00.906268948Z" level=info msg="Ensure that sandbox a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96 in task-service has 
been cleanup successfully" Dec 13 13:34:00.907990 containerd[1509]: time="2024-12-13T13:34:00.907737561Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:00.907990 containerd[1509]: time="2024-12-13T13:34:00.907833473Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:00.907990 containerd[1509]: time="2024-12-13T13:34:00.907851160Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:34:00.906113 systemd[1]: run-netns-cni\x2d7e509a00\x2da17a\x2d1173\x2dddc7\x2dfc71529d992f.mount: Deactivated successfully. Dec 13 13:34:00.909567 containerd[1509]: time="2024-12-13T13:34:00.909218967Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully" Dec 13 13:34:00.909567 containerd[1509]: time="2024-12-13T13:34:00.909245556Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully" Dec 13 13:34:00.909567 containerd[1509]: time="2024-12-13T13:34:00.909390532Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:00.909567 containerd[1509]: time="2024-12-13T13:34:00.909482846Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:00.909567 containerd[1509]: time="2024-12-13T13:34:00.909500352Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:00.910675 containerd[1509]: time="2024-12-13T13:34:00.910409230Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:1,}" Dec 13 13:34:00.911189 systemd[1]: run-netns-cni\x2dc9c50cfc\x2df893\x2dd6b5\x2d9f9d\x2d52526e53a121.mount: Deactivated successfully. Dec 13 13:34:00.913375 containerd[1509]: time="2024-12-13T13:34:00.913298329Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:00.913491 containerd[1509]: time="2024-12-13T13:34:00.913437336Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:00.913491 containerd[1509]: time="2024-12-13T13:34:00.913456718Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:00.920752 containerd[1509]: time="2024-12-13T13:34:00.919359560Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:00.920752 containerd[1509]: time="2024-12-13T13:34:00.919477789Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:00.920752 containerd[1509]: time="2024-12-13T13:34:00.919497026Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:34:00.921436 containerd[1509]: time="2024-12-13T13:34:00.920818382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:6,}" Dec 13 13:34:01.079219 containerd[1509]: time="2024-12-13T13:34:01.078955922Z" level=error msg="Failed to destroy network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.079493 containerd[1509]: time="2024-12-13T13:34:01.079441262Z" level=error msg="encountered an error cleaning up failed sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.079562 containerd[1509]: time="2024-12-13T13:34:01.079508794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.080201 kubelet[1925]: E1213 13:34:01.079911 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.080201 kubelet[1925]: E1213 13:34:01.079988 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:01.080201 kubelet[1925]: E1213 13:34:01.080021 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:01.080425 kubelet[1925]: E1213 13:34:01.080098 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-bqgq7" podUID="9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e" Dec 13 13:34:01.153826 containerd[1509]: time="2024-12-13T13:34:01.153539578Z" level=error msg="Failed to destroy network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.154838 containerd[1509]: time="2024-12-13T13:34:01.154294390Z" level=error msg="encountered an error cleaning up failed sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.154838 containerd[1509]: time="2024-12-13T13:34:01.154435202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.154997 kubelet[1925]: E1213 13:34:01.154764 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:01.154997 kubelet[1925]: E1213 13:34:01.154835 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:01.154997 kubelet[1925]: E1213 13:34:01.154868 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:01.155174 kubelet[1925]: E1213 13:34:01.154944 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:34:01.672285 kubelet[1925]: E1213 13:34:01.672168 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:01.901008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02-shm.mount: Deactivated successfully. 
Dec 13 13:34:01.913948 kubelet[1925]: I1213 13:34:01.913771 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560" Dec 13 13:34:01.914994 containerd[1509]: time="2024-12-13T13:34:01.914865390Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:01.915490 containerd[1509]: time="2024-12-13T13:34:01.915141249Z" level=info msg="Ensure that sandbox fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560 in task-service has been cleanup successfully" Dec 13 13:34:01.919388 containerd[1509]: time="2024-12-13T13:34:01.919348471Z" level=info msg="TearDown network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" successfully" Dec 13 13:34:01.919388 containerd[1509]: time="2024-12-13T13:34:01.919377997Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" returns successfully" Dec 13 13:34:01.919976 systemd[1]: run-netns-cni\x2da4a0a7c2\x2ddbae\x2d3257\x2d22ee\x2d2d607c75f819.mount: Deactivated successfully. 
Dec 13 13:34:01.923916 containerd[1509]: time="2024-12-13T13:34:01.923770082Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:01.923916 containerd[1509]: time="2024-12-13T13:34:01.923880602Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:01.924411 containerd[1509]: time="2024-12-13T13:34:01.924286380Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:01.925047 kubelet[1925]: I1213 13:34:01.924700 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02" Dec 13 13:34:01.925418 containerd[1509]: time="2024-12-13T13:34:01.925390731Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\"" Dec 13 13:34:01.927668 containerd[1509]: time="2024-12-13T13:34:01.925737818Z" level=info msg="Ensure that sandbox c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02 in task-service has been cleanup successfully" Dec 13 13:34:01.928356 containerd[1509]: time="2024-12-13T13:34:01.928314424Z" level=info msg="TearDown network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" successfully" Dec 13 13:34:01.928496 systemd[1]: run-netns-cni\x2d0518dfb0\x2de034\x2dcd7e\x2d216d\x2de782f09ac54c.mount: Deactivated successfully. 
Dec 13 13:34:01.929342 containerd[1509]: time="2024-12-13T13:34:01.929010873Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" returns successfully" Dec 13 13:34:01.929342 containerd[1509]: time="2024-12-13T13:34:01.929110452Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:01.929342 containerd[1509]: time="2024-12-13T13:34:01.929216045Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:01.929342 containerd[1509]: time="2024-12-13T13:34:01.929234467Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:01.930710 containerd[1509]: time="2024-12-13T13:34:01.930406399Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\"" Dec 13 13:34:01.930710 containerd[1509]: time="2024-12-13T13:34:01.930578122Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully" Dec 13 13:34:01.930710 containerd[1509]: time="2024-12-13T13:34:01.930598759Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully" Dec 13 13:34:01.930916 containerd[1509]: time="2024-12-13T13:34:01.930750565Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:01.930916 containerd[1509]: time="2024-12-13T13:34:01.930841443Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:01.930916 containerd[1509]: time="2024-12-13T13:34:01.930859074Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 
13:34:01.931943 containerd[1509]: time="2024-12-13T13:34:01.931466737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:2,}" Dec 13 13:34:01.934588 containerd[1509]: time="2024-12-13T13:34:01.934547937Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:01.935056 containerd[1509]: time="2024-12-13T13:34:01.935016254Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:01.935056 containerd[1509]: time="2024-12-13T13:34:01.935047255Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:01.942809 containerd[1509]: time="2024-12-13T13:34:01.942764215Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:01.942910 containerd[1509]: time="2024-12-13T13:34:01.942880857Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:01.942910 containerd[1509]: time="2024-12-13T13:34:01.942900060Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:01.943849 containerd[1509]: time="2024-12-13T13:34:01.943810800Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:01.943944 containerd[1509]: time="2024-12-13T13:34:01.943921003Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:01.944013 containerd[1509]: time="2024-12-13T13:34:01.943945723Z" level=info msg="StopPodSandbox for 
\"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:34:01.944631 containerd[1509]: time="2024-12-13T13:34:01.944589280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:7,}" Dec 13 13:34:02.198971 containerd[1509]: time="2024-12-13T13:34:02.198821026Z" level=error msg="Failed to destroy network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.200039 containerd[1509]: time="2024-12-13T13:34:02.199284115Z" level=error msg="encountered an error cleaning up failed sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.200039 containerd[1509]: time="2024-12-13T13:34:02.199368617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.200513 kubelet[1925]: E1213 13:34:02.200478 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.200731 kubelet[1925]: E1213 13:34:02.200551 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:02.200731 kubelet[1925]: E1213 13:34:02.200586 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:02.201278 kubelet[1925]: E1213 13:34:02.201216 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:34:02.204620 containerd[1509]: 
time="2024-12-13T13:34:02.204104031Z" level=error msg="Failed to destroy network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.204620 containerd[1509]: time="2024-12-13T13:34:02.204470313Z" level=error msg="encountered an error cleaning up failed sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.204620 containerd[1509]: time="2024-12-13T13:34:02.204528917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.204896 kubelet[1925]: E1213 13:34:02.204789 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:02.204896 kubelet[1925]: E1213 13:34:02.204835 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:02.204896 kubelet[1925]: E1213 13:34:02.204863 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:02.205055 kubelet[1925]: E1213 13:34:02.204916 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-bqgq7" podUID="9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e" Dec 13 13:34:02.673080 kubelet[1925]: E1213 13:34:02.672969 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:02.902196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79-shm.mount: Deactivated successfully. 
Dec 13 13:34:02.935606 kubelet[1925]: I1213 13:34:02.934614 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79" Dec 13 13:34:02.937000 containerd[1509]: time="2024-12-13T13:34:02.936916459Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" Dec 13 13:34:02.941648 containerd[1509]: time="2024-12-13T13:34:02.938506740Z" level=info msg="Ensure that sandbox efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79 in task-service has been cleanup successfully" Dec 13 13:34:02.941648 containerd[1509]: time="2024-12-13T13:34:02.939052772Z" level=info msg="TearDown network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" successfully" Dec 13 13:34:02.941648 containerd[1509]: time="2024-12-13T13:34:02.939335218Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" returns successfully" Dec 13 13:34:02.941648 containerd[1509]: time="2024-12-13T13:34:02.941050041Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:02.941648 containerd[1509]: time="2024-12-13T13:34:02.941215692Z" level=info msg="TearDown network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" successfully" Dec 13 13:34:02.941648 containerd[1509]: time="2024-12-13T13:34:02.941254454Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" returns successfully" Dec 13 13:34:02.944103 kubelet[1925]: I1213 13:34:02.942596 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645" Dec 13 13:34:02.944655 systemd[1]: run-netns-cni\x2df51ec1df\x2d6ad5\x2de6e6\x2d94cf\x2d489733ef6c8e.mount: Deactivated successfully. 
Dec 13 13:34:02.946933 containerd[1509]: time="2024-12-13T13:34:02.946903882Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:02.947515 containerd[1509]: time="2024-12-13T13:34:02.947487160Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:02.947779 containerd[1509]: time="2024-12-13T13:34:02.947732088Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:02.948198 containerd[1509]: time="2024-12-13T13:34:02.948169248Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\"" Dec 13 13:34:02.948537 containerd[1509]: time="2024-12-13T13:34:02.948502449Z" level=info msg="Ensure that sandbox e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645 in task-service has been cleanup successfully" Dec 13 13:34:02.952801 containerd[1509]: time="2024-12-13T13:34:02.951052007Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:02.952801 containerd[1509]: time="2024-12-13T13:34:02.951165814Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:02.952801 containerd[1509]: time="2024-12-13T13:34:02.951185056Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:02.953976 containerd[1509]: time="2024-12-13T13:34:02.953945492Z" level=info msg="TearDown network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" successfully" Dec 13 13:34:02.954099 containerd[1509]: time="2024-12-13T13:34:02.954074093Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" 
returns successfully" Dec 13 13:34:02.954503 systemd[1]: run-netns-cni\x2dd444e4b7\x2d78ee\x2d56b0\x2d6805\x2dcdfc33ab3b33.mount: Deactivated successfully. Dec 13 13:34:02.955997 containerd[1509]: time="2024-12-13T13:34:02.955940343Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\"" Dec 13 13:34:02.956091 containerd[1509]: time="2024-12-13T13:34:02.956066421Z" level=info msg="TearDown network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" successfully" Dec 13 13:34:02.956164 containerd[1509]: time="2024-12-13T13:34:02.956085235Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" returns successfully" Dec 13 13:34:02.956232 containerd[1509]: time="2024-12-13T13:34:02.956188449Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:02.956347 containerd[1509]: time="2024-12-13T13:34:02.956290802Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:02.956347 containerd[1509]: time="2024-12-13T13:34:02.956340471Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:34:02.957345 containerd[1509]: time="2024-12-13T13:34:02.957281194Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\"" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.957522060Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.957614796Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully" Dec 13 13:34:02.959147 containerd[1509]: 
time="2024-12-13T13:34:02.957742417Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.957850695Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.957868538Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.958291156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:3,}" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.958631113Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.958825023Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:02.959147 containerd[1509]: time="2024-12-13T13:34:02.958892876Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:02.959685 containerd[1509]: time="2024-12-13T13:34:02.959257455Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:02.959685 containerd[1509]: time="2024-12-13T13:34:02.959415832Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:02.959685 containerd[1509]: time="2024-12-13T13:34:02.959433741Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 
13:34:02.960035 containerd[1509]: time="2024-12-13T13:34:02.960004485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:8,}" Dec 13 13:34:03.130503 containerd[1509]: time="2024-12-13T13:34:03.130439190Z" level=error msg="Failed to destroy network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.131235 containerd[1509]: time="2024-12-13T13:34:03.131198464Z" level=error msg="encountered an error cleaning up failed sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.132010 containerd[1509]: time="2024-12-13T13:34:03.131973240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.133281 kubelet[1925]: E1213 13:34:03.132797 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.133281 kubelet[1925]: E1213 13:34:03.132944 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:03.133281 kubelet[1925]: E1213 13:34:03.133002 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:03.133531 kubelet[1925]: E1213 13:34:03.133201 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-bqgq7" podUID="9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e" Dec 13 13:34:03.146061 containerd[1509]: time="2024-12-13T13:34:03.146012786Z" level=error msg="Failed to destroy network for 
sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.146868 containerd[1509]: time="2024-12-13T13:34:03.146833137Z" level=error msg="encountered an error cleaning up failed sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.147055 containerd[1509]: time="2024-12-13T13:34:03.147016596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.147620 kubelet[1925]: E1213 13:34:03.147417 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:03.147620 kubelet[1925]: E1213 13:34:03.147477 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:03.147620 kubelet[1925]: E1213 13:34:03.147532 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:03.147865 kubelet[1925]: E1213 13:34:03.147630 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:34:03.673332 kubelet[1925]: E1213 13:34:03.673173 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:03.902875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08-shm.mount: Deactivated successfully. 
Dec 13 13:34:03.903065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844-shm.mount: Deactivated successfully. Dec 13 13:34:03.954096 kubelet[1925]: I1213 13:34:03.953973 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08" Dec 13 13:34:03.956291 containerd[1509]: time="2024-12-13T13:34:03.955543539Z" level=info msg="StopPodSandbox for \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\"" Dec 13 13:34:03.956291 containerd[1509]: time="2024-12-13T13:34:03.955853715Z" level=info msg="Ensure that sandbox 80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08 in task-service has been cleanup successfully" Dec 13 13:34:03.957003 containerd[1509]: time="2024-12-13T13:34:03.956973425Z" level=info msg="TearDown network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" successfully" Dec 13 13:34:03.957169 containerd[1509]: time="2024-12-13T13:34:03.957145004Z" level=info msg="StopPodSandbox for \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" returns successfully" Dec 13 13:34:03.957861 containerd[1509]: time="2024-12-13T13:34:03.957797561Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" Dec 13 13:34:03.958231 containerd[1509]: time="2024-12-13T13:34:03.958188917Z" level=info msg="TearDown network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" successfully" Dec 13 13:34:03.958478 containerd[1509]: time="2024-12-13T13:34:03.958399498Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" returns successfully" Dec 13 13:34:03.959175 containerd[1509]: time="2024-12-13T13:34:03.959137413Z" level=info msg="StopPodSandbox for 
\"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:03.959538 containerd[1509]: time="2024-12-13T13:34:03.959411560Z" level=info msg="TearDown network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" successfully" Dec 13 13:34:03.959538 containerd[1509]: time="2024-12-13T13:34:03.959459882Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" returns successfully" Dec 13 13:34:03.960836 containerd[1509]: time="2024-12-13T13:34:03.960163685Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:03.960836 containerd[1509]: time="2024-12-13T13:34:03.960263247Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:03.960836 containerd[1509]: time="2024-12-13T13:34:03.960281831Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:03.961034 kubelet[1925]: I1213 13:34:03.960532 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844" Dec 13 13:34:03.961807 containerd[1509]: time="2024-12-13T13:34:03.961574139Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:03.962512 containerd[1509]: time="2024-12-13T13:34:03.961988714Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:03.962512 containerd[1509]: time="2024-12-13T13:34:03.962014338Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:03.962512 containerd[1509]: time="2024-12-13T13:34:03.962089392Z" level=info msg="StopPodSandbox 
for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\"" Dec 13 13:34:03.962512 containerd[1509]: time="2024-12-13T13:34:03.962369872Z" level=info msg="Ensure that sandbox e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844 in task-service has been cleanup successfully" Dec 13 13:34:03.962005 systemd[1]: run-netns-cni\x2df0e50a0b\x2dff25\x2d8199\x2d2355\x2d2ff5b2404af6.mount: Deactivated successfully. Dec 13 13:34:03.965233 containerd[1509]: time="2024-12-13T13:34:03.963824349Z" level=info msg="TearDown network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" successfully" Dec 13 13:34:03.965233 containerd[1509]: time="2024-12-13T13:34:03.963880889Z" level=info msg="StopPodSandbox for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" returns successfully" Dec 13 13:34:03.965233 containerd[1509]: time="2024-12-13T13:34:03.964432319Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:03.965233 containerd[1509]: time="2024-12-13T13:34:03.964527471Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:03.965233 containerd[1509]: time="2024-12-13T13:34:03.964545234Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:34:03.965233 containerd[1509]: time="2024-12-13T13:34:03.964620314Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\"" Dec 13 13:34:03.965791 containerd[1509]: time="2024-12-13T13:34:03.965351257Z" level=info msg="TearDown network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" successfully" Dec 13 13:34:03.965791 containerd[1509]: time="2024-12-13T13:34:03.965371853Z" level=info msg="StopPodSandbox for 
\"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" returns successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.966301988Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.966427339Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.966446071Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.966515355Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\"" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.966600484Z" level=info msg="TearDown network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.966619091Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" returns successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.967053592Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.967153395Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.967170555Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.967242527Z" level=info msg="StopPodSandbox for 
\"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\"" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.967363159Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully" Dec 13 13:34:03.967455 containerd[1509]: time="2024-12-13T13:34:03.967389187Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully" Dec 13 13:34:03.967009 systemd[1]: run-netns-cni\x2d80e3e650\x2d0572\x2dbb0e\x2d14da\x2d472e46eea165.mount: Deactivated successfully. Dec 13 13:34:03.969588 containerd[1509]: time="2024-12-13T13:34:03.969104320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:4,}" Dec 13 13:34:03.969588 containerd[1509]: time="2024-12-13T13:34:03.969357024Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:03.969588 containerd[1509]: time="2024-12-13T13:34:03.969453123Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:03.969588 containerd[1509]: time="2024-12-13T13:34:03.969520146Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:34:03.971416 containerd[1509]: time="2024-12-13T13:34:03.971184624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:9,}" Dec 13 13:34:04.138686 containerd[1509]: time="2024-12-13T13:34:04.138560463Z" level=error msg="Failed to destroy network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.139928 containerd[1509]: time="2024-12-13T13:34:04.139066665Z" level=error msg="encountered an error cleaning up failed sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.139928 containerd[1509]: time="2024-12-13T13:34:04.139149902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.140039 kubelet[1925]: E1213 13:34:04.139428 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.140039 kubelet[1925]: E1213 13:34:04.139501 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:04.140039 kubelet[1925]: E1213 13:34:04.139531 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9zzm" Dec 13 13:34:04.140235 kubelet[1925]: E1213 13:34:04.139601 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9zzm_calico-system(20230d91-0ce8-452a-b397-e3c0f73a38ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9zzm" podUID="20230d91-0ce8-452a-b397-e3c0f73a38ab" Dec 13 13:34:04.150388 containerd[1509]: time="2024-12-13T13:34:04.150345124Z" level=error msg="Failed to destroy network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.150924 containerd[1509]: time="2024-12-13T13:34:04.150887647Z" level=error msg="encountered an error cleaning up failed sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.151386 containerd[1509]: time="2024-12-13T13:34:04.151150177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.151773 kubelet[1925]: E1213 13:34:04.151739 1925 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:04.152399 kubelet[1925]: E1213 13:34:04.151976 1925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:04.152399 kubelet[1925]: E1213 13:34:04.152070 1925 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-bqgq7" Dec 13 13:34:04.152399 kubelet[1925]: E1213 13:34:04.152163 1925 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-bqgq7_default(9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-bqgq7" podUID="9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e" Dec 13 13:34:04.313636 containerd[1509]: time="2024-12-13T13:34:04.313504870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:04.314740 containerd[1509]: time="2024-12-13T13:34:04.314684670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:34:04.315715 containerd[1509]: time="2024-12-13T13:34:04.315628644Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:04.335155 containerd[1509]: time="2024-12-13T13:34:04.335111387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:04.336760 containerd[1509]: 
time="2024-12-13T13:34:04.336551309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.478569756s" Dec 13 13:34:04.336760 containerd[1509]: time="2024-12-13T13:34:04.336603259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:34:04.360511 containerd[1509]: time="2024-12-13T13:34:04.360406171Z" level=info msg="CreateContainer within sandbox \"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:34:04.377509 containerd[1509]: time="2024-12-13T13:34:04.377408032Z" level=info msg="CreateContainer within sandbox \"6b63a5657cf9eed0bfbe2e2ac3ce2956c49d60746879d8593f404157d7c1c42b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5\"" Dec 13 13:34:04.378230 containerd[1509]: time="2024-12-13T13:34:04.378081477Z" level=info msg="StartContainer for \"8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5\"" Dec 13 13:34:04.498997 systemd[1]: Started cri-containerd-8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5.scope - libcontainer container 8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5. Dec 13 13:34:04.548824 containerd[1509]: time="2024-12-13T13:34:04.548476043Z" level=info msg="StartContainer for \"8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5\" returns successfully" Dec 13 13:34:04.650470 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 13 13:34:04.650814 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Dec 13 13:34:04.673679 kubelet[1925]: E1213 13:34:04.673614 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:04.905588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273-shm.mount: Deactivated successfully. Dec 13 13:34:04.905760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2-shm.mount: Deactivated successfully. Dec 13 13:34:04.905877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940603969.mount: Deactivated successfully. Dec 13 13:34:04.969215 kubelet[1925]: I1213 13:34:04.969093 1925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273" Dec 13 13:34:04.972672 containerd[1509]: time="2024-12-13T13:34:04.970544624Z" level=info msg="StopPodSandbox for \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\"" Dec 13 13:34:04.972672 containerd[1509]: time="2024-12-13T13:34:04.970820190Z" level=info msg="Ensure that sandbox 8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273 in task-service has been cleanup successfully" Dec 13 13:34:04.973647 containerd[1509]: time="2024-12-13T13:34:04.973491392Z" level=info msg="TearDown network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\" successfully" Dec 13 13:34:04.973647 containerd[1509]: time="2024-12-13T13:34:04.973520851Z" level=info msg="StopPodSandbox for \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\" returns successfully" Dec 13 13:34:04.975109 containerd[1509]: time="2024-12-13T13:34:04.974638617Z" level=info msg="StopPodSandbox for 
\"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\"" Dec 13 13:34:04.975109 containerd[1509]: time="2024-12-13T13:34:04.974779964Z" level=info msg="TearDown network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" successfully" Dec 13 13:34:04.975109 containerd[1509]: time="2024-12-13T13:34:04.974850621Z" level=info msg="StopPodSandbox for \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" returns successfully" Dec 13 13:34:04.975191 systemd[1]: run-netns-cni\x2d64149867\x2d3cae\x2de8a1\x2d49fd\x2d6f855766bffa.mount: Deactivated successfully. Dec 13 13:34:04.978447 containerd[1509]: time="2024-12-13T13:34:04.977384957Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" Dec 13 13:34:04.978447 containerd[1509]: time="2024-12-13T13:34:04.977491656Z" level=info msg="TearDown network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" successfully" Dec 13 13:34:04.978447 containerd[1509]: time="2024-12-13T13:34:04.977511565Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" returns successfully" Dec 13 13:34:04.978447 containerd[1509]: time="2024-12-13T13:34:04.978407132Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:04.979638 containerd[1509]: time="2024-12-13T13:34:04.978502417Z" level=info msg="TearDown network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" successfully" Dec 13 13:34:04.979638 containerd[1509]: time="2024-12-13T13:34:04.978520437Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" returns successfully" Dec 13 13:34:04.979823 kubelet[1925]: I1213 13:34:04.978823 1925 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2" Dec 13 13:34:04.980038 containerd[1509]: time="2024-12-13T13:34:04.979992257Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:04.980196 containerd[1509]: time="2024-12-13T13:34:04.980168456Z" level=info msg="StopPodSandbox for \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\"" Dec 13 13:34:04.980460 containerd[1509]: time="2024-12-13T13:34:04.980205292Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:04.980628 containerd[1509]: time="2024-12-13T13:34:04.980564694Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:04.981490 containerd[1509]: time="2024-12-13T13:34:04.981322525Z" level=info msg="Ensure that sandbox ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2 in task-service has been cleanup successfully" Dec 13 13:34:04.982677 containerd[1509]: time="2024-12-13T13:34:04.981976085Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:04.982677 containerd[1509]: time="2024-12-13T13:34:04.982073776Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:04.982677 containerd[1509]: time="2024-12-13T13:34:04.982091156Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:04.984006 containerd[1509]: time="2024-12-13T13:34:04.983958214Z" level=info msg="TearDown network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\" successfully" Dec 13 13:34:04.984184 containerd[1509]: time="2024-12-13T13:34:04.984088310Z" level=info msg="StopPodSandbox 
for \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\" returns successfully" Dec 13 13:34:04.986435 containerd[1509]: time="2024-12-13T13:34:04.986210018Z" level=info msg="StopPodSandbox for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\"" Dec 13 13:34:04.986435 containerd[1509]: time="2024-12-13T13:34:04.986269507Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:04.986435 containerd[1509]: time="2024-12-13T13:34:04.986354122Z" level=info msg="TearDown network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" successfully" Dec 13 13:34:04.986435 containerd[1509]: time="2024-12-13T13:34:04.986373378Z" level=info msg="StopPodSandbox for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" returns successfully" Dec 13 13:34:04.986435 containerd[1509]: time="2024-12-13T13:34:04.986405437Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:04.986435 containerd[1509]: time="2024-12-13T13:34:04.986424980Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:34:04.987490 containerd[1509]: time="2024-12-13T13:34:04.987065813Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\"" Dec 13 13:34:04.987490 containerd[1509]: time="2024-12-13T13:34:04.987204711Z" level=info msg="TearDown network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" successfully" Dec 13 13:34:04.987490 containerd[1509]: time="2024-12-13T13:34:04.987224146Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" returns successfully" Dec 13 13:34:04.987490 containerd[1509]: time="2024-12-13T13:34:04.987324551Z" level=info msg="StopPodSandbox for 
\"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:04.987490 containerd[1509]: time="2024-12-13T13:34:04.987410925Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:04.987490 containerd[1509]: time="2024-12-13T13:34:04.987427334Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:04.988071 systemd[1]: run-netns-cni\x2dfa790859\x2d953f\x2d0608\x2d8fbc\x2d8c9261b0c903.mount: Deactivated successfully. Dec 13 13:34:04.988812 containerd[1509]: time="2024-12-13T13:34:04.988374433Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:04.988812 containerd[1509]: time="2024-12-13T13:34:04.988479473Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:04.988812 containerd[1509]: time="2024-12-13T13:34:04.988497445Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:04.988812 containerd[1509]: time="2024-12-13T13:34:04.988594007Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\"" Dec 13 13:34:04.989520 containerd[1509]: time="2024-12-13T13:34:04.989251017Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:04.989520 containerd[1509]: time="2024-12-13T13:34:04.989384280Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:04.989520 containerd[1509]: time="2024-12-13T13:34:04.989402417Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 
13:34:04.990194 containerd[1509]: time="2024-12-13T13:34:04.990165351Z" level=info msg="TearDown network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" successfully" Dec 13 13:34:04.990473 containerd[1509]: time="2024-12-13T13:34:04.990269785Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" returns successfully" Dec 13 13:34:04.991400 containerd[1509]: time="2024-12-13T13:34:04.990983088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:10,}" Dec 13 13:34:04.991717 containerd[1509]: time="2024-12-13T13:34:04.991688746Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\"" Dec 13 13:34:04.991919 containerd[1509]: time="2024-12-13T13:34:04.991892025Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully" Dec 13 13:34:04.992769 containerd[1509]: time="2024-12-13T13:34:04.992134706Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully" Dec 13 13:34:04.993541 containerd[1509]: time="2024-12-13T13:34:04.993510690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:5,}" Dec 13 13:34:05.034678 kubelet[1925]: I1213 13:34:05.032416 1925 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-smzkf" podStartSLOduration=4.886535188 podStartE2EDuration="25.032331665s" podCreationTimestamp="2024-12-13 13:33:40 +0000 UTC" firstStartedPulling="2024-12-13 13:33:44.191710477 +0000 UTC m=+4.749917739" lastFinishedPulling="2024-12-13 13:34:04.337506942 +0000 UTC m=+24.895714216" observedRunningTime="2024-12-13 13:34:05.026178199 +0000 UTC 
m=+25.584385502" watchObservedRunningTime="2024-12-13 13:34:05.032331665 +0000 UTC m=+25.590538924" Dec 13 13:34:05.538801 systemd-networkd[1441]: calib2aec9d08ec: Link UP Dec 13 13:34:05.539194 systemd-networkd[1441]: calib2aec9d08ec: Gained carrier Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.179 [INFO][2947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.277 [INFO][2947] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0 nginx-deployment-6d5f899847- default 9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e 1173 0 2024-12-13 13:34:00 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.34.102 nginx-deployment-6d5f899847-bqgq7 eth0 default [] [] [kns.default ksa.default.default] calib2aec9d08ec [] []}} ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.278 [INFO][2947] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.436 [INFO][3050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" HandleID="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" 
Workload="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.461 [INFO][3050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" HandleID="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Workload="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003babb0), Attrs:map[string]string{"namespace":"default", "node":"10.230.34.102", "pod":"nginx-deployment-6d5f899847-bqgq7", "timestamp":"2024-12-13 13:34:05.436459456 +0000 UTC"}, Hostname:"10.230.34.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.461 [INFO][3050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.461 [INFO][3050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.461 [INFO][3050] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.34.102' Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.465 [INFO][3050] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.474 [INFO][3050] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.481 [INFO][3050] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.485 [INFO][3050] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.489 [INFO][3050] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.489 [INFO][3050] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.491 [INFO][3050] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944 Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.499 [INFO][3050] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.508 [INFO][3050] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.129/26] block=192.168.114.128/26 
handle="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.508 [INFO][3050] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.129/26] handle="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" host="10.230.34.102" Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.508 [INFO][3050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:05.561129 containerd[1509]: 2024-12-13 13:34:05.508 [INFO][3050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.129/26] IPv6=[] ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" HandleID="k8s-pod-network.e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Workload="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.567319 containerd[1509]: 2024-12-13 13:34:05.514 [INFO][2947] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"", Pod:"nginx-deployment-6d5f899847-bqgq7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calib2aec9d08ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:05.567319 containerd[1509]: 2024-12-13 13:34:05.514 [INFO][2947] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.129/32] ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.567319 containerd[1509]: 2024-12-13 13:34:05.514 [INFO][2947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2aec9d08ec ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.567319 containerd[1509]: 2024-12-13 13:34:05.540 [INFO][2947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.567319 containerd[1509]: 2024-12-13 13:34:05.541 [INFO][2947] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" 
Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944", Pod:"nginx-deployment-6d5f899847-bqgq7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calib2aec9d08ec", MAC:"a2:7a:94:9b:5e:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:05.567319 containerd[1509]: 2024-12-13 13:34:05.556 [INFO][2947] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944" Namespace="default" Pod="nginx-deployment-6d5f899847-bqgq7" WorkloadEndpoint="10.230.34.102-k8s-nginx--deployment--6d5f899847--bqgq7-eth0" Dec 13 13:34:05.581732 systemd-networkd[1441]: cali694e291dcf0: Link UP Dec 13 13:34:05.584353 systemd-networkd[1441]: 
cali694e291dcf0: Gained carrier Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.170 [INFO][2938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.278 [INFO][2938] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.34.102-k8s-csi--node--driver--l9zzm-eth0 csi-node-driver- calico-system 20230d91-0ce8-452a-b397-e3c0f73a38ab 1062 0 2024-12-13 13:33:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.34.102 csi-node-driver-l9zzm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali694e291dcf0 [] []}} ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.278 [INFO][2938] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.431 [INFO][3049] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" HandleID="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Workload="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.464 [INFO][3049] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" HandleID="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Workload="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af170), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.34.102", "pod":"csi-node-driver-l9zzm", "timestamp":"2024-12-13 13:34:05.429509449 +0000 UTC"}, Hostname:"10.230.34.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.465 [INFO][3049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.508 [INFO][3049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.509 [INFO][3049] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.34.102' Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.512 [INFO][3049] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.520 [INFO][3049] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.531 [INFO][3049] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.538 [INFO][3049] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.543 [INFO][3049] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.543 [INFO][3049] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.546 [INFO][3049] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.555 [INFO][3049] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.563 [INFO][3049] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.130/26] block=192.168.114.128/26 handle="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.563 [INFO][3049] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.130/26] handle="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" host="10.230.34.102" Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.563 [INFO][3049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:34:05.621403 containerd[1509]: 2024-12-13 13:34:05.563 [INFO][3049] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.130/26] IPv6=[] ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" HandleID="k8s-pod-network.54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Workload="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.622591 containerd[1509]: 2024-12-13 13:34:05.570 [INFO][2938] cni-plugin/k8s.go 386: Populated endpoint ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-csi--node--driver--l9zzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20230d91-0ce8-452a-b397-e3c0f73a38ab", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"", Pod:"csi-node-driver-l9zzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali694e291dcf0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:05.622591 containerd[1509]: 2024-12-13 13:34:05.571 [INFO][2938] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.130/32] ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.622591 containerd[1509]: 2024-12-13 13:34:05.571 [INFO][2938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali694e291dcf0 ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.622591 containerd[1509]: 2024-12-13 13:34:05.588 [INFO][2938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.622591 containerd[1509]: 2024-12-13 13:34:05.591 [INFO][2938] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-csi--node--driver--l9zzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20230d91-0ce8-452a-b397-e3c0f73a38ab", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2024, 
time.December, 13, 13, 33, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb", Pod:"csi-node-driver-l9zzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali694e291dcf0", MAC:"e2:3e:3d:6b:a5:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:05.622591 containerd[1509]: 2024-12-13 13:34:05.614 [INFO][2938] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb" Namespace="calico-system" Pod="csi-node-driver-l9zzm" WorkloadEndpoint="10.230.34.102-k8s-csi--node--driver--l9zzm-eth0" Dec 13 13:34:05.674455 kubelet[1925]: E1213 13:34:05.674317 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:05.676068 containerd[1509]: time="2024-12-13T13:34:05.673853876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:05.676068 containerd[1509]: time="2024-12-13T13:34:05.673987459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:05.676068 containerd[1509]: time="2024-12-13T13:34:05.674013209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:05.676068 containerd[1509]: time="2024-12-13T13:34:05.674133637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:05.694967 containerd[1509]: time="2024-12-13T13:34:05.694077786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:05.694967 containerd[1509]: time="2024-12-13T13:34:05.694187609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:05.694967 containerd[1509]: time="2024-12-13T13:34:05.694211742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:05.694967 containerd[1509]: time="2024-12-13T13:34:05.694346415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:05.710836 kernel: bpftool[3164]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:34:05.728028 systemd[1]: Started cri-containerd-e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944.scope - libcontainer container e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944. 
Dec 13 13:34:05.744073 systemd[1]: Started cri-containerd-54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb.scope - libcontainer container 54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb. Dec 13 13:34:05.807392 containerd[1509]: time="2024-12-13T13:34:05.807252127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9zzm,Uid:20230d91-0ce8-452a-b397-e3c0f73a38ab,Namespace:calico-system,Attempt:10,} returns sandbox id \"54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb\"" Dec 13 13:34:05.815558 containerd[1509]: time="2024-12-13T13:34:05.812978862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:34:05.862085 containerd[1509]: time="2024-12-13T13:34:05.861948514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-bqgq7,Uid:9c7bd7a9-f050-43e9-9f01-dbc63acc5f4e,Namespace:default,Attempt:5,} returns sandbox id \"e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944\"" Dec 13 13:34:06.038306 systemd[1]: run-containerd-runc-k8s.io-8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5-runc.S7J6AV.mount: Deactivated successfully. 
Dec 13 13:34:06.112746 systemd-networkd[1441]: vxlan.calico: Link UP Dec 13 13:34:06.112879 systemd-networkd[1441]: vxlan.calico: Gained carrier Dec 13 13:34:06.674829 kubelet[1925]: E1213 13:34:06.674755 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:06.917910 systemd-networkd[1441]: calib2aec9d08ec: Gained IPv6LL Dec 13 13:34:07.301884 systemd-networkd[1441]: cali694e291dcf0: Gained IPv6LL Dec 13 13:34:07.345624 containerd[1509]: time="2024-12-13T13:34:07.345551284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:07.346941 containerd[1509]: time="2024-12-13T13:34:07.346859094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:34:07.347747 containerd[1509]: time="2024-12-13T13:34:07.347679991Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:07.351631 containerd[1509]: time="2024-12-13T13:34:07.351550792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:07.352860 containerd[1509]: time="2024-12-13T13:34:07.352571712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.539192576s" Dec 13 13:34:07.352860 containerd[1509]: time="2024-12-13T13:34:07.352715495Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:34:07.354255 containerd[1509]: time="2024-12-13T13:34:07.353921815Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:34:07.355685 containerd[1509]: time="2024-12-13T13:34:07.355476795Z" level=info msg="CreateContainer within sandbox \"54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:34:07.365882 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL Dec 13 13:34:07.376320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239407659.mount: Deactivated successfully. Dec 13 13:34:07.381604 containerd[1509]: time="2024-12-13T13:34:07.381549241Z" level=info msg="CreateContainer within sandbox \"54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6bf76342b6c8dab3461cc4331358bac3be68fcb9f92d90b5d3439a7d0cf4dd8\"" Dec 13 13:34:07.382455 containerd[1509]: time="2024-12-13T13:34:07.382415616Z" level=info msg="StartContainer for \"b6bf76342b6c8dab3461cc4331358bac3be68fcb9f92d90b5d3439a7d0cf4dd8\"" Dec 13 13:34:07.434890 systemd[1]: Started cri-containerd-b6bf76342b6c8dab3461cc4331358bac3be68fcb9f92d90b5d3439a7d0cf4dd8.scope - libcontainer container b6bf76342b6c8dab3461cc4331358bac3be68fcb9f92d90b5d3439a7d0cf4dd8. 
Dec 13 13:34:07.479446 containerd[1509]: time="2024-12-13T13:34:07.479399106Z" level=info msg="StartContainer for \"b6bf76342b6c8dab3461cc4331358bac3be68fcb9f92d90b5d3439a7d0cf4dd8\" returns successfully" Dec 13 13:34:07.675320 kubelet[1925]: E1213 13:34:07.675145 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:08.675723 kubelet[1925]: E1213 13:34:08.675617 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:09.676825 kubelet[1925]: E1213 13:34:09.676738 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:10.677753 kubelet[1925]: E1213 13:34:10.677630 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:11.133511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230386588.mount: Deactivated successfully. 
Dec 13 13:34:11.678242 kubelet[1925]: E1213 13:34:11.678161 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:12.681706 kubelet[1925]: E1213 13:34:12.680149 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:12.942677 containerd[1509]: time="2024-12-13T13:34:12.942466445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:12.944542 containerd[1509]: time="2024-12-13T13:34:12.944495664Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 13:34:12.945237 containerd[1509]: time="2024-12-13T13:34:12.944867496Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:12.948337 containerd[1509]: time="2024-12-13T13:34:12.948263649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:12.949971 containerd[1509]: time="2024-12-13T13:34:12.949765674Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 5.595787991s" Dec 13 13:34:12.949971 containerd[1509]: time="2024-12-13T13:34:12.949820913Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:34:12.951686 containerd[1509]: 
time="2024-12-13T13:34:12.951391429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:34:13.000260 containerd[1509]: time="2024-12-13T13:34:12.999083382Z" level=info msg="CreateContainer within sandbox \"e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 13:34:13.029563 containerd[1509]: time="2024-12-13T13:34:13.029503147Z" level=info msg="CreateContainer within sandbox \"e22bc38f0e672cba79efd19ae999d71ada32f2681541e1adc59cb580c5703944\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"daa0cf1518829edc257aad512f986e5b1f29155cc4c65037f1d80e982caf65bb\"" Dec 13 13:34:13.030620 containerd[1509]: time="2024-12-13T13:34:13.030589235Z" level=info msg="StartContainer for \"daa0cf1518829edc257aad512f986e5b1f29155cc4c65037f1d80e982caf65bb\"" Dec 13 13:34:13.089904 systemd[1]: Started cri-containerd-daa0cf1518829edc257aad512f986e5b1f29155cc4c65037f1d80e982caf65bb.scope - libcontainer container daa0cf1518829edc257aad512f986e5b1f29155cc4c65037f1d80e982caf65bb. 
Dec 13 13:34:13.132941 containerd[1509]: time="2024-12-13T13:34:13.132577815Z" level=info msg="StartContainer for \"daa0cf1518829edc257aad512f986e5b1f29155cc4c65037f1d80e982caf65bb\" returns successfully" Dec 13 13:34:13.680832 kubelet[1925]: E1213 13:34:13.680762 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:14.067542 kubelet[1925]: I1213 13:34:14.067443 1925 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-bqgq7" podStartSLOduration=6.980687931 podStartE2EDuration="14.067343803s" podCreationTimestamp="2024-12-13 13:34:00 +0000 UTC" firstStartedPulling="2024-12-13 13:34:05.863982313 +0000 UTC m=+26.422189571" lastFinishedPulling="2024-12-13 13:34:12.950638178 +0000 UTC m=+33.508845443" observedRunningTime="2024-12-13 13:34:14.066507398 +0000 UTC m=+34.624714676" watchObservedRunningTime="2024-12-13 13:34:14.067343803 +0000 UTC m=+34.625551070" Dec 13 13:34:14.681712 kubelet[1925]: E1213 13:34:14.681269 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:14.750147 containerd[1509]: time="2024-12-13T13:34:14.750079366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:14.751545 containerd[1509]: time="2024-12-13T13:34:14.751476483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:34:14.752534 containerd[1509]: time="2024-12-13T13:34:14.752476829Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:14.755265 containerd[1509]: time="2024-12-13T13:34:14.755232372Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:14.756499 containerd[1509]: time="2024-12-13T13:34:14.756336110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.804901925s" Dec 13 13:34:14.756499 containerd[1509]: time="2024-12-13T13:34:14.756376613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:34:14.773944 containerd[1509]: time="2024-12-13T13:34:14.773896829Z" level=info msg="CreateContainer within sandbox \"54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:34:14.789737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862418694.mount: Deactivated successfully. 
Dec 13 13:34:14.804751 containerd[1509]: time="2024-12-13T13:34:14.804569257Z" level=info msg="CreateContainer within sandbox \"54c37c8d4f5f89004cc48511b27f5a65e809642092aeee9f683c44d84a653bdb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"67f718a358d7998e8039bb29d82bb73d8331e5d3c01bdb29163912ff0e8eb83b\"" Dec 13 13:34:14.805427 containerd[1509]: time="2024-12-13T13:34:14.805384498Z" level=info msg="StartContainer for \"67f718a358d7998e8039bb29d82bb73d8331e5d3c01bdb29163912ff0e8eb83b\"" Dec 13 13:34:14.852887 systemd[1]: Started cri-containerd-67f718a358d7998e8039bb29d82bb73d8331e5d3c01bdb29163912ff0e8eb83b.scope - libcontainer container 67f718a358d7998e8039bb29d82bb73d8331e5d3c01bdb29163912ff0e8eb83b. Dec 13 13:34:14.912596 containerd[1509]: time="2024-12-13T13:34:14.912389535Z" level=info msg="StartContainer for \"67f718a358d7998e8039bb29d82bb73d8331e5d3c01bdb29163912ff0e8eb83b\" returns successfully" Dec 13 13:34:15.089161 kubelet[1925]: I1213 13:34:15.089118 1925 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-l9zzm" podStartSLOduration=26.143674668 podStartE2EDuration="35.089053306s" podCreationTimestamp="2024-12-13 13:33:40 +0000 UTC" firstStartedPulling="2024-12-13 13:34:05.811529293 +0000 UTC m=+26.369736552" lastFinishedPulling="2024-12-13 13:34:14.756907921 +0000 UTC m=+35.315115190" observedRunningTime="2024-12-13 13:34:15.086318601 +0000 UTC m=+35.644525897" watchObservedRunningTime="2024-12-13 13:34:15.089053306 +0000 UTC m=+35.647260575" Dec 13 13:34:15.682620 kubelet[1925]: E1213 13:34:15.682539 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:15.830814 kubelet[1925]: I1213 13:34:15.830765 1925 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 
13:34:15.832052 kubelet[1925]: I1213 13:34:15.832004 1925 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:34:16.683564 kubelet[1925]: E1213 13:34:16.683464 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:17.684518 kubelet[1925]: E1213 13:34:17.684451 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:18.685675 kubelet[1925]: E1213 13:34:18.685609 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:19.686075 kubelet[1925]: E1213 13:34:19.685987 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:20.651002 kubelet[1925]: E1213 13:34:20.650947 1925 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:20.686627 kubelet[1925]: E1213 13:34:20.686557 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:21.687076 kubelet[1925]: E1213 13:34:21.686783 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:22.687100 kubelet[1925]: E1213 13:34:22.686998 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:22.687100 kubelet[1925]: I1213 13:34:22.687070 1925 topology_manager.go:215] "Topology Admit Handler" podUID="d09112b6-8532-4f3f-89f8-c21df651dac7" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 13:34:22.695459 systemd[1]: Created slice kubepods-besteffort-podd09112b6_8532_4f3f_89f8_c21df651dac7.slice - libcontainer 
container kubepods-besteffort-podd09112b6_8532_4f3f_89f8_c21df651dac7.slice. Dec 13 13:34:22.847747 kubelet[1925]: I1213 13:34:22.847433 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmfd\" (UniqueName: \"kubernetes.io/projected/d09112b6-8532-4f3f-89f8-c21df651dac7-kube-api-access-4zmfd\") pod \"nfs-server-provisioner-0\" (UID: \"d09112b6-8532-4f3f-89f8-c21df651dac7\") " pod="default/nfs-server-provisioner-0" Dec 13 13:34:22.847747 kubelet[1925]: I1213 13:34:22.847506 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d09112b6-8532-4f3f-89f8-c21df651dac7-data\") pod \"nfs-server-provisioner-0\" (UID: \"d09112b6-8532-4f3f-89f8-c21df651dac7\") " pod="default/nfs-server-provisioner-0" Dec 13 13:34:23.000068 containerd[1509]: time="2024-12-13T13:34:22.999863534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d09112b6-8532-4f3f-89f8-c21df651dac7,Namespace:default,Attempt:0,}" Dec 13 13:34:23.205210 systemd-networkd[1441]: cali60e51b789ff: Link UP Dec 13 13:34:23.205755 systemd-networkd[1441]: cali60e51b789ff: Gained carrier Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.072 [INFO][3520] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.34.102-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d09112b6-8532-4f3f-89f8-c21df651dac7 1287 0 2024-12-13 13:34:22 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 
10.230.34.102 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.072 [INFO][3520] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.127 [INFO][3531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" HandleID="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Workload="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.143 [INFO][3531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" HandleID="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Workload="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002642b0), Attrs:map[string]string{"namespace":"default", "node":"10.230.34.102", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 13:34:23.127920143 +0000 UTC"}, Hostname:"10.230.34.102", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.143 [INFO][3531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.143 [INFO][3531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.144 [INFO][3531] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.34.102' Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.147 [INFO][3531] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.156 [INFO][3531] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.162 [INFO][3531] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.166 [INFO][3531] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.170 [INFO][3531] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.170 [INFO][3531] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.173 [INFO][3531] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a Dec 
13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.183 [INFO][3531] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.198 [INFO][3531] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.131/26] block=192.168.114.128/26 handle="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.198 [INFO][3531] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.131/26] handle="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" host="10.230.34.102" Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.198 [INFO][3531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:23.225741 containerd[1509]: 2024-12-13 13:34:23.198 [INFO][3531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.131/26] IPv6=[] ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" HandleID="k8s-pod-network.491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Workload="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.227131 containerd[1509]: 2024-12-13 13:34:23.200 [INFO][3520] cni-plugin/k8s.go 386: Populated endpoint ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d09112b6-8532-4f3f-89f8-c21df651dac7", 
ResourceVersion:"1287", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:23.227131 containerd[1509]: 2024-12-13 13:34:23.200 [INFO][3520] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.131/32] ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.227131 containerd[1509]: 2024-12-13 13:34:23.200 [INFO][3520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.227131 containerd[1509]: 2024-12-13 13:34:23.206 [INFO][3520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.227426 containerd[1509]: 2024-12-13 13:34:23.207 [INFO][3520] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d09112b6-8532-4f3f-89f8-c21df651dac7", ResourceVersion:"1287", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", 
MAC:"ca:ed:03:ba:71:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:23.227426 containerd[1509]: 2024-12-13 13:34:23.223 [INFO][3520] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.34.102-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:34:23.266926 containerd[1509]: time="2024-12-13T13:34:23.266597664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:23.267121 containerd[1509]: time="2024-12-13T13:34:23.266970792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:23.267684 containerd[1509]: time="2024-12-13T13:34:23.267106075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:23.267917 containerd[1509]: time="2024-12-13T13:34:23.267428338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:23.302921 systemd[1]: Started cri-containerd-491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a.scope - libcontainer container 491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a. 
Dec 13 13:34:23.368905 containerd[1509]: time="2024-12-13T13:34:23.368840165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d09112b6-8532-4f3f-89f8-c21df651dac7,Namespace:default,Attempt:0,} returns sandbox id \"491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a\"" Dec 13 13:34:23.372546 containerd[1509]: time="2024-12-13T13:34:23.371950740Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:34:23.687939 kubelet[1925]: E1213 13:34:23.687851 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:24.521144 systemd-networkd[1441]: cali60e51b789ff: Gained IPv6LL Dec 13 13:34:24.689164 kubelet[1925]: E1213 13:34:24.689065 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:25.691024 kubelet[1925]: E1213 13:34:25.690810 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:26.662133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426188002.mount: Deactivated successfully. 
Dec 13 13:34:26.692770 kubelet[1925]: E1213 13:34:26.692708 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:27.693821 kubelet[1925]: E1213 13:34:27.693745 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:28.695002 kubelet[1925]: E1213 13:34:28.694890 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:29.669029 containerd[1509]: time="2024-12-13T13:34:29.668906197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.670801 containerd[1509]: time="2024-12-13T13:34:29.670639362Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 13:34:29.673728 containerd[1509]: time="2024-12-13T13:34:29.672325777Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.696407 kubelet[1925]: E1213 13:34:29.696359 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:29.697807 containerd[1509]: time="2024-12-13T13:34:29.697773488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.699668 containerd[1509]: time="2024-12-13T13:34:29.699598431Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.327603612s" Dec 13 13:34:29.699768 containerd[1509]: time="2024-12-13T13:34:29.699691068Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 13:34:29.703610 containerd[1509]: time="2024-12-13T13:34:29.703554949Z" level=info msg="CreateContainer within sandbox \"491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:34:29.723610 containerd[1509]: time="2024-12-13T13:34:29.723571785Z" level=info msg="CreateContainer within sandbox \"491e9ce2ab5ba21e0953b9bea1cad183dc1700e7cc26aaab3539c401f152fe3a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7b4251b881c129ad5f8ec51cd563ea705128a6ee6543f8131f4e8874216d9c78\"" Dec 13 13:34:29.724479 containerd[1509]: time="2024-12-13T13:34:29.724418538Z" level=info msg="StartContainer for \"7b4251b881c129ad5f8ec51cd563ea705128a6ee6543f8131f4e8874216d9c78\"" Dec 13 13:34:29.773293 systemd[1]: run-containerd-runc-k8s.io-7b4251b881c129ad5f8ec51cd563ea705128a6ee6543f8131f4e8874216d9c78-runc.vDF5gc.mount: Deactivated successfully. Dec 13 13:34:29.782894 systemd[1]: Started cri-containerd-7b4251b881c129ad5f8ec51cd563ea705128a6ee6543f8131f4e8874216d9c78.scope - libcontainer container 7b4251b881c129ad5f8ec51cd563ea705128a6ee6543f8131f4e8874216d9c78. 
Dec 13 13:34:29.825389 containerd[1509]: time="2024-12-13T13:34:29.825331282Z" level=info msg="StartContainer for \"7b4251b881c129ad5f8ec51cd563ea705128a6ee6543f8131f4e8874216d9c78\" returns successfully" Dec 13 13:34:30.161713 kubelet[1925]: I1213 13:34:30.161498 1925 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.832647385 podStartE2EDuration="8.1613336s" podCreationTimestamp="2024-12-13 13:34:22 +0000 UTC" firstStartedPulling="2024-12-13 13:34:23.371551814 +0000 UTC m=+43.929759073" lastFinishedPulling="2024-12-13 13:34:29.700238018 +0000 UTC m=+50.258445288" observedRunningTime="2024-12-13 13:34:30.160840467 +0000 UTC m=+50.719047753" watchObservedRunningTime="2024-12-13 13:34:30.1613336 +0000 UTC m=+50.719540869" Dec 13 13:34:30.697388 kubelet[1925]: E1213 13:34:30.697321 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:31.697815 kubelet[1925]: E1213 13:34:31.697735 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:32.698429 kubelet[1925]: E1213 13:34:32.698368 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:33.699477 kubelet[1925]: E1213 13:34:33.699392 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:34.700249 kubelet[1925]: E1213 13:34:34.700144 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:35.701469 kubelet[1925]: E1213 13:34:35.701376 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:36.701620 kubelet[1925]: E1213 13:34:36.701553 1925 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:37.702146 kubelet[1925]: E1213 13:34:37.702079 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:38.703004 kubelet[1925]: E1213 13:34:38.702925 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:39.683757 kubelet[1925]: I1213 13:34:39.683594 1925 topology_manager.go:215] "Topology Admit Handler" podUID="c2672d28-26bb-430e-add5-5639407ed78a" podNamespace="default" podName="test-pod-1" Dec 13 13:34:39.694548 systemd[1]: Created slice kubepods-besteffort-podc2672d28_26bb_430e_add5_5639407ed78a.slice - libcontainer container kubepods-besteffort-podc2672d28_26bb_430e_add5_5639407ed78a.slice. Dec 13 13:34:39.703311 kubelet[1925]: E1213 13:34:39.703274 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:39.865903 kubelet[1925]: I1213 13:34:39.865836 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c1eed81-90db-4f16-bc56-082064ffe40b\" (UniqueName: \"kubernetes.io/nfs/c2672d28-26bb-430e-add5-5639407ed78a-pvc-4c1eed81-90db-4f16-bc56-082064ffe40b\") pod \"test-pod-1\" (UID: \"c2672d28-26bb-430e-add5-5639407ed78a\") " pod="default/test-pod-1" Dec 13 13:34:39.865903 kubelet[1925]: I1213 13:34:39.865906 1925 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6fq2\" (UniqueName: \"kubernetes.io/projected/c2672d28-26bb-430e-add5-5639407ed78a-kube-api-access-z6fq2\") pod \"test-pod-1\" (UID: \"c2672d28-26bb-430e-add5-5639407ed78a\") " pod="default/test-pod-1" Dec 13 13:34:40.017702 kernel: FS-Cache: Loaded Dec 13 13:34:40.101215 kernel: RPC: Registered named UNIX socket transport module. 
Dec 13 13:34:40.101439 kernel: RPC: Registered udp transport module. Dec 13 13:34:40.101494 kernel: RPC: Registered tcp transport module. Dec 13 13:34:40.102071 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:34:40.103151 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 13:34:40.375913 kernel: NFS: Registering the id_resolver key type Dec 13 13:34:40.376250 kernel: Key type id_resolver registered Dec 13 13:34:40.376772 kernel: Key type id_legacy registered Dec 13 13:34:40.427894 nfsidmap[3721]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 13:34:40.437389 nfsidmap[3724]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 13:34:40.602219 containerd[1509]: time="2024-12-13T13:34:40.602136624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c2672d28-26bb-430e-add5-5639407ed78a,Namespace:default,Attempt:0,}" Dec 13 13:34:40.652421 kubelet[1925]: E1213 13:34:40.651794 1925 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:40.683787 containerd[1509]: time="2024-12-13T13:34:40.683165078Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:40.683787 containerd[1509]: time="2024-12-13T13:34:40.683345861Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:40.683787 containerd[1509]: time="2024-12-13T13:34:40.683365238Z" level=info msg="StopPodSandbox for \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:34:40.688916 containerd[1509]: time="2024-12-13T13:34:40.688884141Z" level=info msg="RemovePodSandbox for 
\"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:40.697478 containerd[1509]: time="2024-12-13T13:34:40.697427527Z" level=info msg="Forcibly stopping sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\"" Dec 13 13:34:40.697772 containerd[1509]: time="2024-12-13T13:34:40.697683291Z" level=info msg="TearDown network for sandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" successfully" Dec 13 13:34:40.704138 kubelet[1925]: E1213 13:34:40.703929 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:34:40.711755 containerd[1509]: time="2024-12-13T13:34:40.711706636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:40.711952 containerd[1509]: time="2024-12-13T13:34:40.711923165Z" level=info msg="RemovePodSandbox \"5aa3a0a1ee3b4659ed4fc0ea1c42835fcbde52c773ea9e270ede34f5c29d644c\" returns successfully" Dec 13 13:34:40.712841 containerd[1509]: time="2024-12-13T13:34:40.712810865Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:40.713114 containerd[1509]: time="2024-12-13T13:34:40.713078431Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:40.713279 containerd[1509]: time="2024-12-13T13:34:40.713206939Z" level=info msg="StopPodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:40.713817 containerd[1509]: time="2024-12-13T13:34:40.713783502Z" level=info msg="RemovePodSandbox for \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:40.713928 
containerd[1509]: time="2024-12-13T13:34:40.713821128Z" level=info msg="Forcibly stopping sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\"" Dec 13 13:34:40.713977 containerd[1509]: time="2024-12-13T13:34:40.713921099Z" level=info msg="TearDown network for sandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" successfully" Dec 13 13:34:40.719752 containerd[1509]: time="2024-12-13T13:34:40.719712514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:40.719852 containerd[1509]: time="2024-12-13T13:34:40.719777782Z" level=info msg="RemovePodSandbox \"a9bf62381fd7d265186d3afe14c3485c6255b0c5ca6e1afe0ac78b8786d1952d\" returns successfully" Dec 13 13:34:40.727078 containerd[1509]: time="2024-12-13T13:34:40.726361568Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:40.727078 containerd[1509]: time="2024-12-13T13:34:40.726514153Z" level=info msg="TearDown network for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:40.727078 containerd[1509]: time="2024-12-13T13:34:40.726553598Z" level=info msg="StopPodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:40.729812 containerd[1509]: time="2024-12-13T13:34:40.729781070Z" level=info msg="RemovePodSandbox for \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:40.730000 containerd[1509]: time="2024-12-13T13:34:40.729954511Z" level=info msg="Forcibly stopping sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\"" Dec 13 13:34:40.730229 containerd[1509]: time="2024-12-13T13:34:40.730184906Z" level=info msg="TearDown network 
for sandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" successfully" Dec 13 13:34:40.734499 containerd[1509]: time="2024-12-13T13:34:40.734461186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:40.734929 containerd[1509]: time="2024-12-13T13:34:40.734690067Z" level=info msg="RemovePodSandbox \"806034164e3bd1b2f0c2218df74bc898f9548b6fdccc6244916402da1e2bacf7\" returns successfully" Dec 13 13:34:40.736510 containerd[1509]: time="2024-12-13T13:34:40.735718222Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:40.736510 containerd[1509]: time="2024-12-13T13:34:40.736356133Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:40.736510 containerd[1509]: time="2024-12-13T13:34:40.736430404Z" level=info msg="StopPodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:34:40.737542 containerd[1509]: time="2024-12-13T13:34:40.736817623Z" level=info msg="RemovePodSandbox for \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:40.737542 containerd[1509]: time="2024-12-13T13:34:40.736846769Z" level=info msg="Forcibly stopping sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\"" Dec 13 13:34:40.737542 containerd[1509]: time="2024-12-13T13:34:40.736937619Z" level=info msg="TearDown network for sandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" successfully" Dec 13 13:34:40.739498 containerd[1509]: time="2024-12-13T13:34:40.739456535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:40.739596 containerd[1509]: time="2024-12-13T13:34:40.739506490Z" level=info msg="RemovePodSandbox \"2e8d6721a59d314b3d0a8bcdf38bda10f22b356813469000dd1a437c0b9250e8\" returns successfully" Dec 13 13:34:40.740123 containerd[1509]: time="2024-12-13T13:34:40.739918438Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:40.740123 containerd[1509]: time="2024-12-13T13:34:40.740039545Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:40.740123 containerd[1509]: time="2024-12-13T13:34:40.740058839Z" level=info msg="StopPodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:40.740971 containerd[1509]: time="2024-12-13T13:34:40.740942803Z" level=info msg="RemovePodSandbox for \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:40.741236 containerd[1509]: time="2024-12-13T13:34:40.741126462Z" level=info msg="Forcibly stopping sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\"" Dec 13 13:34:40.741607 containerd[1509]: time="2024-12-13T13:34:40.741561495Z" level=info msg="TearDown network for sandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" successfully" Dec 13 13:34:40.745296 containerd[1509]: time="2024-12-13T13:34:40.745178302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:40.745296 containerd[1509]: time="2024-12-13T13:34:40.745223082Z" level=info msg="RemovePodSandbox \"a1ad99f6aeb4e47962dd648c30f639895fc1d776f030c9b5bf7d999b684f8be0\" returns successfully" Dec 13 13:34:40.745665 containerd[1509]: time="2024-12-13T13:34:40.745578214Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:40.745749 containerd[1509]: time="2024-12-13T13:34:40.745722604Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:40.745749 containerd[1509]: time="2024-12-13T13:34:40.745741043Z" level=info msg="StopPodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:40.746771 containerd[1509]: time="2024-12-13T13:34:40.746072000Z" level=info msg="RemovePodSandbox for \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:40.746771 containerd[1509]: time="2024-12-13T13:34:40.746121012Z" level=info msg="Forcibly stopping sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\"" Dec 13 13:34:40.746771 containerd[1509]: time="2024-12-13T13:34:40.746223648Z" level=info msg="TearDown network for sandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" successfully" Dec 13 13:34:40.750119 containerd[1509]: time="2024-12-13T13:34:40.750076811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:40.750376 containerd[1509]: time="2024-12-13T13:34:40.750133992Z" level=info msg="RemovePodSandbox \"5778ece67220be491b52f677544a3acfa0f15338385d7cb4e18345a3abcb18fe\" returns successfully" Dec 13 13:34:40.750793 containerd[1509]: time="2024-12-13T13:34:40.750570700Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:40.750793 containerd[1509]: time="2024-12-13T13:34:40.750721118Z" level=info msg="TearDown network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" successfully" Dec 13 13:34:40.750793 containerd[1509]: time="2024-12-13T13:34:40.750742034Z" level=info msg="StopPodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" returns successfully" Dec 13 13:34:40.751372 containerd[1509]: time="2024-12-13T13:34:40.751344186Z" level=info msg="RemovePodSandbox for \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:40.752776 containerd[1509]: time="2024-12-13T13:34:40.751526913Z" level=info msg="Forcibly stopping sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\"" Dec 13 13:34:40.752776 containerd[1509]: time="2024-12-13T13:34:40.751736109Z" level=info msg="TearDown network for sandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" successfully" Dec 13 13:34:40.754706 containerd[1509]: time="2024-12-13T13:34:40.754675299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:40.754890 containerd[1509]: time="2024-12-13T13:34:40.754861025Z" level=info msg="RemovePodSandbox \"fa291a37c125db69b561f6552ca6e154c4d30dcd88ec1d735fad43de536fb560\" returns successfully" Dec 13 13:34:40.755662 containerd[1509]: time="2024-12-13T13:34:40.755552421Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" Dec 13 13:34:40.756137 containerd[1509]: time="2024-12-13T13:34:40.756097852Z" level=info msg="TearDown network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" successfully" Dec 13 13:34:40.756284 containerd[1509]: time="2024-12-13T13:34:40.756253229Z" level=info msg="StopPodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" returns successfully" Dec 13 13:34:40.756772 containerd[1509]: time="2024-12-13T13:34:40.756743636Z" level=info msg="RemovePodSandbox for \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" Dec 13 13:34:40.757039 containerd[1509]: time="2024-12-13T13:34:40.757015449Z" level=info msg="Forcibly stopping sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\"" Dec 13 13:34:40.757266 containerd[1509]: time="2024-12-13T13:34:40.757215479Z" level=info msg="TearDown network for sandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" successfully" Dec 13 13:34:40.760234 containerd[1509]: time="2024-12-13T13:34:40.760202552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:40.760383 containerd[1509]: time="2024-12-13T13:34:40.760356736Z" level=info msg="RemovePodSandbox \"efa1d538b4bc9add1bd187e60d1ab94df81de2ddad15d703ac710156a8c89a79\" returns successfully"
Dec 13 13:34:40.760875 containerd[1509]: time="2024-12-13T13:34:40.760843932Z" level=info msg="StopPodSandbox for \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\""
Dec 13 13:34:40.761023 containerd[1509]: time="2024-12-13T13:34:40.760997293Z" level=info msg="TearDown network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" successfully"
Dec 13 13:34:40.761092 containerd[1509]: time="2024-12-13T13:34:40.761022511Z" level=info msg="StopPodSandbox for \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" returns successfully"
Dec 13 13:34:40.761483 containerd[1509]: time="2024-12-13T13:34:40.761454790Z" level=info msg="RemovePodSandbox for \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\""
Dec 13 13:34:40.761977 containerd[1509]: time="2024-12-13T13:34:40.761739663Z" level=info msg="Forcibly stopping sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\""
Dec 13 13:34:40.761977 containerd[1509]: time="2024-12-13T13:34:40.761897007Z" level=info msg="TearDown network for sandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" successfully"
Dec 13 13:34:40.773676 containerd[1509]: time="2024-12-13T13:34:40.773603391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.773776 containerd[1509]: time="2024-12-13T13:34:40.773693355Z" level=info msg="RemovePodSandbox \"80e4f575bc36ab713d593c7a3e2746da73417526ecd30560b3bd3e02ad6dbd08\" returns successfully"
Dec 13 13:34:40.774085 containerd[1509]: time="2024-12-13T13:34:40.774046617Z" level=info msg="StopPodSandbox for \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\""
Dec 13 13:34:40.774245 containerd[1509]: time="2024-12-13T13:34:40.774177833Z" level=info msg="TearDown network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\" successfully"
Dec 13 13:34:40.774245 containerd[1509]: time="2024-12-13T13:34:40.774199465Z" level=info msg="StopPodSandbox for \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\" returns successfully"
Dec 13 13:34:40.774571 containerd[1509]: time="2024-12-13T13:34:40.774526823Z" level=info msg="RemovePodSandbox for \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\""
Dec 13 13:34:40.774571 containerd[1509]: time="2024-12-13T13:34:40.774554682Z" level=info msg="Forcibly stopping sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\""
Dec 13 13:34:40.774908 containerd[1509]: time="2024-12-13T13:34:40.774672298Z" level=info msg="TearDown network for sandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\" successfully"
Dec 13 13:34:40.777244 containerd[1509]: time="2024-12-13T13:34:40.777196829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.777414 containerd[1509]: time="2024-12-13T13:34:40.777242674Z" level=info msg="RemovePodSandbox \"8f35c4f9188c376bfb4e6e7920d9e43f784db9b7caa5db8a15101c74d8676273\" returns successfully"
Dec 13 13:34:40.777865 containerd[1509]: time="2024-12-13T13:34:40.777833463Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\""
Dec 13 13:34:40.777998 containerd[1509]: time="2024-12-13T13:34:40.777963232Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully"
Dec 13 13:34:40.778069 containerd[1509]: time="2024-12-13T13:34:40.777996083Z" level=info msg="StopPodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully"
Dec 13 13:34:40.778331 containerd[1509]: time="2024-12-13T13:34:40.778290693Z" level=info msg="RemovePodSandbox for \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\""
Dec 13 13:34:40.778423 containerd[1509]: time="2024-12-13T13:34:40.778335922Z" level=info msg="Forcibly stopping sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\""
Dec 13 13:34:40.778479 containerd[1509]: time="2024-12-13T13:34:40.778452453Z" level=info msg="TearDown network for sandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" successfully"
Dec 13 13:34:40.781199 containerd[1509]: time="2024-12-13T13:34:40.781131487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.781449 containerd[1509]: time="2024-12-13T13:34:40.781197791Z" level=info msg="RemovePodSandbox \"a49cd802272c3dd080ae1bd6e830bd122e212bff968b2dd6260081ce8ef5ae96\" returns successfully"
Dec 13 13:34:40.782134 containerd[1509]: time="2024-12-13T13:34:40.781781342Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\""
Dec 13 13:34:40.782134 containerd[1509]: time="2024-12-13T13:34:40.781889767Z" level=info msg="TearDown network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" successfully"
Dec 13 13:34:40.782134 containerd[1509]: time="2024-12-13T13:34:40.781908282Z" level=info msg="StopPodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" returns successfully"
Dec 13 13:34:40.782422 containerd[1509]: time="2024-12-13T13:34:40.782347453Z" level=info msg="RemovePodSandbox for \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\""
Dec 13 13:34:40.782547 containerd[1509]: time="2024-12-13T13:34:40.782419388Z" level=info msg="Forcibly stopping sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\""
Dec 13 13:34:40.782743 containerd[1509]: time="2024-12-13T13:34:40.782667022Z" level=info msg="TearDown network for sandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" successfully"
Dec 13 13:34:40.785491 containerd[1509]: time="2024-12-13T13:34:40.785440878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.785666 containerd[1509]: time="2024-12-13T13:34:40.785509082Z" level=info msg="RemovePodSandbox \"c556daef8ffe98ac226475fb70ade009d48aaa8e4e8d00690a5e566dd5ae9b02\" returns successfully"
Dec 13 13:34:40.786558 containerd[1509]: time="2024-12-13T13:34:40.786044238Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\""
Dec 13 13:34:40.786558 containerd[1509]: time="2024-12-13T13:34:40.786294852Z" level=info msg="TearDown network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" successfully"
Dec 13 13:34:40.786558 containerd[1509]: time="2024-12-13T13:34:40.786326311Z" level=info msg="StopPodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" returns successfully"
Dec 13 13:34:40.787594 containerd[1509]: time="2024-12-13T13:34:40.787332762Z" level=info msg="RemovePodSandbox for \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\""
Dec 13 13:34:40.787594 containerd[1509]: time="2024-12-13T13:34:40.787452914Z" level=info msg="Forcibly stopping sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\""
Dec 13 13:34:40.787822 containerd[1509]: time="2024-12-13T13:34:40.787571591Z" level=info msg="TearDown network for sandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" successfully"
Dec 13 13:34:40.792906 containerd[1509]: time="2024-12-13T13:34:40.792858287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.795771 containerd[1509]: time="2024-12-13T13:34:40.792915122Z" level=info msg="RemovePodSandbox \"e3ca0f9dfa9d9900aea0fe001174697b50e107cb8b14e69d2d740c280a765645\" returns successfully"
Dec 13 13:34:40.796118 containerd[1509]: time="2024-12-13T13:34:40.796087422Z" level=info msg="StopPodSandbox for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\""
Dec 13 13:34:40.799757 containerd[1509]: time="2024-12-13T13:34:40.796352239Z" level=info msg="TearDown network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" successfully"
Dec 13 13:34:40.799757 containerd[1509]: time="2024-12-13T13:34:40.796379387Z" level=info msg="StopPodSandbox for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" returns successfully"
Dec 13 13:34:40.799757 containerd[1509]: time="2024-12-13T13:34:40.797271889Z" level=info msg="RemovePodSandbox for \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\""
Dec 13 13:34:40.799757 containerd[1509]: time="2024-12-13T13:34:40.797676364Z" level=info msg="Forcibly stopping sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\""
Dec 13 13:34:40.799757 containerd[1509]: time="2024-12-13T13:34:40.797771006Z" level=info msg="TearDown network for sandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" successfully"
Dec 13 13:34:40.804077 containerd[1509]: time="2024-12-13T13:34:40.803928346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.804850 containerd[1509]: time="2024-12-13T13:34:40.804278956Z" level=info msg="RemovePodSandbox \"e00c79de7aa774063e3a9a2ebb81462662288851eafb8b5b9fdce324e8f95844\" returns successfully"
Dec 13 13:34:40.805278 containerd[1509]: time="2024-12-13T13:34:40.805238228Z" level=info msg="StopPodSandbox for \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\""
Dec 13 13:34:40.805411 containerd[1509]: time="2024-12-13T13:34:40.805370911Z" level=info msg="TearDown network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\" successfully"
Dec 13 13:34:40.805515 containerd[1509]: time="2024-12-13T13:34:40.805409920Z" level=info msg="StopPodSandbox for \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\" returns successfully"
Dec 13 13:34:40.815070 containerd[1509]: time="2024-12-13T13:34:40.807882580Z" level=info msg="RemovePodSandbox for \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\""
Dec 13 13:34:40.815070 containerd[1509]: time="2024-12-13T13:34:40.807917646Z" level=info msg="Forcibly stopping sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\""
Dec 13 13:34:40.815070 containerd[1509]: time="2024-12-13T13:34:40.809692799Z" level=info msg="TearDown network for sandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\" successfully"
Dec 13 13:34:40.817426 containerd[1509]: time="2024-12-13T13:34:40.817393563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:40.818198 containerd[1509]: time="2024-12-13T13:34:40.818170367Z" level=info msg="RemovePodSandbox \"ec6e45a3152036bb99b885ae7ab10d9f25eca3e142499b3ce6d99952ad42c8a2\" returns successfully"
Dec 13 13:34:40.830129 systemd-networkd[1441]: cali5ec59c6bf6e: Link UP
Dec 13 13:34:40.831533 systemd-networkd[1441]: cali5ec59c6bf6e: Gained carrier
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.672 [INFO][3728] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.34.102-k8s-test--pod--1-eth0 default c2672d28-26bb-430e-add5-5639407ed78a 1363 0 2024-12-13 13:34:24 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.34.102 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.672 [INFO][3728] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.734 [INFO][3738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" HandleID="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Workload="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.752 [INFO][3738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" HandleID="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Workload="10.230.34.102-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed730), Attrs:map[string]string{"namespace":"default", "node":"10.230.34.102", "pod":"test-pod-1", "timestamp":"2024-12-13 13:34:40.734079439 +0000 UTC"}, Hostname:"10.230.34.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.752 [INFO][3738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.752 [INFO][3738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.753 [INFO][3738] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.34.102'
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.755 [INFO][3738] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.763 [INFO][3738] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.772 [INFO][3738] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.775 [INFO][3738] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.779 [INFO][3738] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.779 [INFO][3738] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.786 [INFO][3738] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.800 [INFO][3738] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.816 [INFO][3738] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.132/26] block=192.168.114.128/26 handle="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.816 [INFO][3738] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.132/26] handle="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" host="10.230.34.102"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.816 [INFO][3738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.816 [INFO][3738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.132/26] IPv6=[] ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" HandleID="k8s-pod-network.589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Workload="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.847251 containerd[1509]: 2024-12-13 13:34:40.823 [INFO][3728] cni-plugin/k8s.go 386: Populated endpoint ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c2672d28-26bb-430e-add5-5639407ed78a", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:34:40.848492 containerd[1509]: 2024-12-13 13:34:40.823 [INFO][3728] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.132/32] ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.848492 containerd[1509]: 2024-12-13 13:34:40.823 [INFO][3728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.848492 containerd[1509]: 2024-12-13 13:34:40.832 [INFO][3728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.848492 containerd[1509]: 2024-12-13 13:34:40.833 [INFO][3728] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.34.102-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c2672d28-26bb-430e-add5-5639407ed78a", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.34.102", ContainerID:"589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"de:80:98:91:b9:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:34:40.848492 containerd[1509]: 2024-12-13 13:34:40.844 [INFO][3728] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.34.102-k8s-test--pod--1-eth0"
Dec 13 13:34:40.894494 containerd[1509]: time="2024-12-13T13:34:40.894307911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:34:40.894494 containerd[1509]: time="2024-12-13T13:34:40.894450770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:34:40.895979 containerd[1509]: time="2024-12-13T13:34:40.895723732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:34:40.895979 containerd[1509]: time="2024-12-13T13:34:40.895913486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:34:40.925873 systemd[1]: Started cri-containerd-589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc.scope - libcontainer container 589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc.
Dec 13 13:34:41.003803 containerd[1509]: time="2024-12-13T13:34:41.003749119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c2672d28-26bb-430e-add5-5639407ed78a,Namespace:default,Attempt:0,} returns sandbox id \"589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc\""
Dec 13 13:34:41.009173 containerd[1509]: time="2024-12-13T13:34:41.008327708Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 13:34:41.705175 kubelet[1925]: E1213 13:34:41.705108 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:41.772511 containerd[1509]: time="2024-12-13T13:34:41.772328862Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:41.773682 containerd[1509]: time="2024-12-13T13:34:41.773607972Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 13 13:34:41.778086 containerd[1509]: time="2024-12-13T13:34:41.777960162Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 769.585143ms"
Dec 13 13:34:41.778086 containerd[1509]: time="2024-12-13T13:34:41.778020018Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 13:34:41.780399 containerd[1509]: time="2024-12-13T13:34:41.780361700Z" level=info msg="CreateContainer within sandbox \"589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 13:34:41.796285 containerd[1509]: time="2024-12-13T13:34:41.796240649Z" level=info msg="CreateContainer within sandbox \"589a79c27bbb68422c924ff893b86b9dd4b5dc093261bc859dcc0889837613cc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3235f6f4ee0436b1fe9cfbfb5ee47e00b11cbc32f3a54b7cacfba9e8a7ffe9e5\""
Dec 13 13:34:41.796979 containerd[1509]: time="2024-12-13T13:34:41.796938962Z" level=info msg="StartContainer for \"3235f6f4ee0436b1fe9cfbfb5ee47e00b11cbc32f3a54b7cacfba9e8a7ffe9e5\""
Dec 13 13:34:41.840863 systemd[1]: Started cri-containerd-3235f6f4ee0436b1fe9cfbfb5ee47e00b11cbc32f3a54b7cacfba9e8a7ffe9e5.scope - libcontainer container 3235f6f4ee0436b1fe9cfbfb5ee47e00b11cbc32f3a54b7cacfba9e8a7ffe9e5.
Dec 13 13:34:41.875682 containerd[1509]: time="2024-12-13T13:34:41.875262051Z" level=info msg="StartContainer for \"3235f6f4ee0436b1fe9cfbfb5ee47e00b11cbc32f3a54b7cacfba9e8a7ffe9e5\" returns successfully"
Dec 13 13:34:41.925883 systemd-networkd[1441]: cali5ec59c6bf6e: Gained IPv6LL
Dec 13 13:34:42.195200 kubelet[1925]: I1213 13:34:42.195118 1925 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.423458944 podStartE2EDuration="18.195076247s" podCreationTimestamp="2024-12-13 13:34:24 +0000 UTC" firstStartedPulling="2024-12-13 13:34:41.006693991 +0000 UTC m=+61.564901250" lastFinishedPulling="2024-12-13 13:34:41.778311291 +0000 UTC m=+62.336518553" observedRunningTime="2024-12-13 13:34:42.191938897 +0000 UTC m=+62.750146187" watchObservedRunningTime="2024-12-13 13:34:42.195076247 +0000 UTC m=+62.753283515"
Dec 13 13:34:42.705503 kubelet[1925]: E1213 13:34:42.705407 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:43.705951 kubelet[1925]: E1213 13:34:43.705878 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:44.706962 kubelet[1925]: E1213 13:34:44.706886 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:45.707890 kubelet[1925]: E1213 13:34:45.707810 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:46.435352 systemd[1]: run-containerd-runc-k8s.io-8d07e5f658ff6c7bcb1f08dedd4ce67f651de0cc1d7b267a9bcecf059474a8b5-runc.eJ5pUG.mount: Deactivated successfully.
Dec 13 13:34:46.708523 kubelet[1925]: E1213 13:34:46.708333 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:47.709058 kubelet[1925]: E1213 13:34:47.708849 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:48.709175 kubelet[1925]: E1213 13:34:48.709126 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:49.709675 kubelet[1925]: E1213 13:34:49.709598 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:50.710202 kubelet[1925]: E1213 13:34:50.710133 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:34:51.710833 kubelet[1925]: E1213 13:34:51.710767 1925 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"