Jan 15 14:12:08.032911 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 15 14:12:08.032947 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 14:12:08.032961 kernel: BIOS-provided physical RAM map: Jan 15 14:12:08.032977 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 15 14:12:08.032987 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 15 14:12:08.032997 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 15 14:12:08.033009 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 15 14:12:08.033020 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 15 14:12:08.033030 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 15 14:12:08.033041 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 15 14:12:08.033052 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 14:12:08.033062 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 15 14:12:08.033078 kernel: NX (Execute Disable) protection: active Jan 15 14:12:08.033089 kernel: APIC: Static calls initialized Jan 15 14:12:08.033102 kernel: SMBIOS 2.8 present. Jan 15 14:12:08.033113 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 15 14:12:08.033125 kernel: Hypervisor detected: KVM Jan 15 14:12:08.033141 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 14:12:08.033153 kernel: kvm-clock: using sched offset of 4659672957 cycles Jan 15 14:12:08.033165 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 14:12:08.033177 kernel: tsc: Detected 2499.998 MHz processor Jan 15 14:12:08.033189 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 14:12:08.033201 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 14:12:08.033212 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 15 14:12:08.033224 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 15 14:12:08.033236 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 14:12:08.033252 kernel: Using GB pages for direct mapping Jan 15 14:12:08.033264 kernel: ACPI: Early table checksum verification disabled Jan 15 14:12:08.033276 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 15 14:12:08.033287 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 14:12:08.033299 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 14:12:08.033311 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 14:12:08.033322 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 15 14:12:08.033334 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 14:12:08.033345 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 15 14:12:08.033362 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 14:12:08.033373 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 14:12:08.033385 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 15 14:12:08.033397 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 15 14:12:08.033408 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 15 14:12:08.033426 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 15 14:12:08.033438 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 15 14:12:08.033455 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 15 14:12:08.033468 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 15 14:12:08.033480 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 15 14:12:08.033492 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 15 14:12:08.033504 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 15 14:12:08.033516 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 15 14:12:08.033528 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 15 14:12:08.033544 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 15 14:12:08.033557 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 15 14:12:08.033569 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 15 14:12:08.033581 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 15 14:12:08.033593 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 15 14:12:08.033605 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 15 14:12:08.033617 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 15 14:12:08.033629 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 15 14:12:08.033653 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 15 14:12:08.033667 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 15 14:12:08.033684 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 15 14:12:08.033696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 15 14:12:08.033708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 15 14:12:08.033721 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 15 14:12:08.033733 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 15 14:12:08.033765 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 15 14:12:08.033778 kernel: Zone ranges: Jan 15 14:12:08.033791 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 14:12:08.033803 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 15 14:12:08.033821 kernel: Normal empty Jan 15 14:12:08.033834 kernel: Movable zone start for each node Jan 15 14:12:08.033857 kernel: Early memory node ranges Jan 15 14:12:08.033875 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 15 14:12:08.033887 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 15 14:12:08.033899 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 15 14:12:08.033911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 14:12:08.033923 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 15 14:12:08.033935 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 15 14:12:08.033947 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 14:12:08.033965 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 14:12:08.033978 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 15 14:12:08.033990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 14:12:08.034002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 14:12:08.034014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 14:12:08.034026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 14:12:08.034038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 14:12:08.034050 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 14:12:08.034062 kernel: TSC deadline timer available Jan 15 14:12:08.034080 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 15 14:12:08.034092 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 14:12:08.034104 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 15 14:12:08.034116 kernel: Booting paravirtualized kernel on KVM Jan 15 14:12:08.034129 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 14:12:08.034141 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 15 14:12:08.034154 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 15 14:12:08.034166 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 15 14:12:08.034178 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 15 14:12:08.034195 kernel: kvm-guest: PV spinlocks enabled Jan 15 14:12:08.034207 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 14:12:08.034221 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 14:12:08.034234 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 15 14:12:08.034246 kernel: random: crng init done Jan 15 14:12:08.034259 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 14:12:08.034271 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 15 14:12:08.034283 kernel: Fallback order for Node 0: 0 Jan 15 14:12:08.034300 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 15 14:12:08.034312 kernel: Policy zone: DMA32 Jan 15 14:12:08.034325 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 14:12:08.034337 kernel: software IO TLB: area num 16. Jan 15 14:12:08.034350 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 194832K reserved, 0K cma-reserved) Jan 15 14:12:08.034362 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 15 14:12:08.034374 kernel: Kernel/User page tables isolation: enabled Jan 15 14:12:08.034387 kernel: ftrace: allocating 37918 entries in 149 pages Jan 15 14:12:08.034399 kernel: ftrace: allocated 149 pages with 4 groups Jan 15 14:12:08.034416 kernel: Dynamic Preempt: voluntary Jan 15 14:12:08.034428 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 14:12:08.034446 kernel: rcu: RCU event tracing is enabled. 
Jan 15 14:12:08.034459 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 15 14:12:08.034472 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 14:12:08.034496 kernel: Rude variant of Tasks RCU enabled. Jan 15 14:12:08.034514 kernel: Tracing variant of Tasks RCU enabled. Jan 15 14:12:08.034527 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 14:12:08.034540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 15 14:12:08.034553 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 15 14:12:08.034565 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 14:12:08.034578 kernel: Console: colour VGA+ 80x25 Jan 15 14:12:08.034596 kernel: printk: console [tty0] enabled Jan 15 14:12:08.034609 kernel: printk: console [ttyS0] enabled Jan 15 14:12:08.034622 kernel: ACPI: Core revision 20230628 Jan 15 14:12:08.034635 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 14:12:08.034659 kernel: x2apic enabled Jan 15 14:12:08.034679 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 14:12:08.034692 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 15 14:12:08.034705 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 15 14:12:08.034718 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 14:12:08.034730 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 15 14:12:08.034825 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 15 14:12:08.034844 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 14:12:08.034857 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 14:12:08.034870 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 15 14:12:08.034889 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 15 14:12:08.034902 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 15 14:12:08.034915 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 15 14:12:08.034927 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 15 14:12:08.034940 kernel: MDS: Mitigation: Clear CPU buffers Jan 15 14:12:08.034953 kernel: MMIO Stale Data: Unknown: No mitigations Jan 15 14:12:08.034965 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 15 14:12:08.034978 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 14:12:08.034991 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 14:12:08.035003 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 14:12:08.035016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 14:12:08.035033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 15 14:12:08.035047 kernel: Freeing SMP alternatives memory: 32K Jan 15 14:12:08.035060 kernel: pid_max: default: 32768 minimum: 301 Jan 15 14:12:08.035072 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 15 14:12:08.035085 kernel: landlock: Up and running. Jan 15 14:12:08.035098 kernel: SELinux: Initializing. 
Jan 15 14:12:08.035110 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 15 14:12:08.035123 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 15 14:12:08.035136 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 15 14:12:08.035149 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 15 14:12:08.035162 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 15 14:12:08.035180 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 15 14:12:08.035193 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 15 14:12:08.035206 kernel: signal: max sigframe size: 1776 Jan 15 14:12:08.035219 kernel: rcu: Hierarchical SRCU implementation. Jan 15 14:12:08.035232 kernel: rcu: Max phase no-delay instances is 400. Jan 15 14:12:08.035245 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 14:12:08.035258 kernel: smp: Bringing up secondary CPUs ... Jan 15 14:12:08.035271 kernel: smpboot: x86: Booting SMP configuration: Jan 15 14:12:08.035284 kernel: .... node #0, CPUs: #1 Jan 15 14:12:08.035302 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 15 14:12:08.035315 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 14:12:08.035327 kernel: smpboot: Max logical packages: 16 Jan 15 14:12:08.035340 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 15 14:12:08.035353 kernel: devtmpfs: initialized Jan 15 14:12:08.035366 kernel: x86/mm: Memory block size: 128MB Jan 15 14:12:08.035379 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 14:12:08.035392 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 15 14:12:08.035405 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 14:12:08.035422 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 14:12:08.035435 kernel: audit: initializing netlink subsys (disabled) Jan 15 14:12:08.035448 kernel: audit: type=2000 audit(1736950326.475:1): state=initialized audit_enabled=0 res=1 Jan 15 14:12:08.035461 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 14:12:08.035474 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 14:12:08.035486 kernel: cpuidle: using governor menu Jan 15 14:12:08.035499 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 14:12:08.035512 kernel: dca service started, version 1.12.1 Jan 15 14:12:08.035525 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 15 14:12:08.035543 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 15 14:12:08.035556 kernel: PCI: Using configuration type 1 for base access Jan 15 14:12:08.035569 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 14:12:08.035582 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 14:12:08.035595 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 14:12:08.035607 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 14:12:08.035620 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 14:12:08.035633 kernel: ACPI: Added _OSI(Module Device) Jan 15 14:12:08.035661 kernel: ACPI: Added _OSI(Processor Device) Jan 15 14:12:08.035681 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 15 14:12:08.035694 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 14:12:08.035706 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 14:12:08.035719 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 15 14:12:08.035732 kernel: ACPI: Interpreter enabled Jan 15 14:12:08.035758 kernel: ACPI: PM: (supports S0 S5) Jan 15 14:12:08.035771 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 14:12:08.035784 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 14:12:08.035797 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 14:12:08.035816 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 14:12:08.035829 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 14:12:08.036062 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 14:12:08.036243 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 15 14:12:08.036406 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 15 14:12:08.036426 kernel: PCI host bridge to bus 0000:00 Jan 15 14:12:08.036603 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 14:12:08.036800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 14:12:08.036955 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 14:12:08.037105 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 15 14:12:08.037297 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 15 14:12:08.037464 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 15 14:12:08.037615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 14:12:08.037847 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 15 14:12:08.038045 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 15 14:12:08.038216 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 15 14:12:08.038381 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 15 14:12:08.038548 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 15 14:12:08.038728 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 14:12:08.038922 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.039129 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 15 14:12:08.039319 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.039488 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 15 14:12:08.039679 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.040472 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 15 14:12:08.040710 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 15 
14:12:08.040939 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jan 15 14:12:08.041120 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.041289 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 15 14:12:08.041465 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.041654 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 15 14:12:08.041888 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.042065 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 15 14:12:08.042247 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 15 14:12:08.042411 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 15 14:12:08.042582 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 15 14:12:08.042778 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 15 14:12:08.042944 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 15 14:12:08.043109 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 15 14:12:08.043281 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 15 14:12:08.043459 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 15 14:12:08.043637 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 15 14:12:08.043836 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 15 14:12:08.044004 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 15 14:12:08.044217 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 15 14:12:08.044419 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 14:12:08.044605 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 15 14:12:08.044856 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 15 14:12:08.045022 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 15 14:12:08.045198 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 15 14:12:08.045365 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 15 14:12:08.045548 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 15 14:12:08.045752 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 15 14:12:08.045937 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 15 14:12:08.046128 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 15 14:12:08.046294 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 15 14:12:08.046496 kernel: pci_bus 0000:02: extended config space not accessible Jan 15 14:12:08.046706 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 15 14:12:08.048924 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 15 14:12:08.049110 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 15 14:12:08.049284 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 15 14:12:08.049467 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 15 14:12:08.049651 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 15 14:12:08.050705 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 15 14:12:08.050942 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 15 14:12:08.051121 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 15 14:12:08.051306 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 15 
14:12:08.051479 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 15 14:12:08.051659 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 15 14:12:08.052905 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 15 14:12:08.053076 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 15 14:12:08.053242 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 15 14:12:08.053408 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 15 14:12:08.053580 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 15 14:12:08.054826 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 15 14:12:08.055007 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 15 14:12:08.055175 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 15 14:12:08.055343 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 15 14:12:08.055508 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 15 14:12:08.055688 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 15 14:12:08.055873 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 15 14:12:08.056046 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 15 14:12:08.056209 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 15 14:12:08.056374 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 15 14:12:08.056535 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 15 14:12:08.056719 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 15 14:12:08.056740 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 14:12:08.058794 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 14:12:08.058808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 14:12:08.058829 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 14:12:08.058843 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 14:12:08.058856 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 14:12:08.058869 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 14:12:08.058882 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 14:12:08.058895 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 14:12:08.058908 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 14:12:08.058921 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 14:12:08.058934 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 14:12:08.058952 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 14:12:08.058965 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 14:12:08.058978 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 14:12:08.058991 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 14:12:08.059004 kernel: iommu: Default domain type: Translated Jan 15 14:12:08.059017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 14:12:08.059030 kernel: PCI: Using ACPI for IRQ routing Jan 15 14:12:08.059043 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 14:12:08.059056 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 15 14:12:08.059074 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 15 14:12:08.059268 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Jan 15 14:12:08.059455 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 14:12:08.059637 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 14:12:08.059671 kernel: vgaarb: loaded Jan 15 14:12:08.059685 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 14:12:08.059697 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 14:12:08.059711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 14:12:08.059731 kernel: pnp: PnP ACPI init Jan 15 14:12:08.060950 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 15 14:12:08.060974 kernel: pnp: PnP ACPI: found 5 devices Jan 15 14:12:08.060988 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 14:12:08.061001 kernel: NET: Registered PF_INET protocol family Jan 15 14:12:08.061014 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 14:12:08.061028 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 15 14:12:08.061041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 14:12:08.061054 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 15 14:12:08.061076 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 15 14:12:08.061089 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 15 14:12:08.061102 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 15 14:12:08.061116 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 15 14:12:08.061129 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 14:12:08.061142 kernel: NET: Registered PF_XDP protocol family Jan 15 14:12:08.061310 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 15 14:12:08.061481 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 15 14:12:08.061673 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 15 14:12:08.061906 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 15 14:12:08.062073 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 15 14:12:08.062237 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 15 14:12:08.062402 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 15 14:12:08.062565 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 15 14:12:08.063806 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 15 14:12:08.063984 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 15 14:12:08.064150 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 15 14:12:08.064313 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 15 14:12:08.064477 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 15 14:12:08.064656 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 15 14:12:08.064861 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 15 14:12:08.065052 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 15 14:12:08.065253 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 15 14:12:08.065434 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 15 
14:12:08.065620 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 15 14:12:08.066837 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 15 14:12:08.067011 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 15 14:12:08.067179 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 15 14:12:08.067345 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 15 14:12:08.067508 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 15 14:12:08.067699 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 15 14:12:08.069234 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 15 14:12:08.069408 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 15 14:12:08.069574 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 15 14:12:08.070804 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 15 14:12:08.070993 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 15 14:12:08.071169 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 15 14:12:08.071334 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 15 14:12:08.071499 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 15 14:12:08.071680 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 15 14:12:08.072901 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 15 14:12:08.073073 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 15 14:12:08.073240 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 15 14:12:08.073405 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 15 14:12:08.073571 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 15 14:12:08.074818 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 15 14:12:08.074997 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 15 14:12:08.075164 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 15 14:12:08.075330 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 15 14:12:08.075496 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 15 14:12:08.075687 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 15 14:12:08.076906 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 15 14:12:08.077089 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 15 14:12:08.077256 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 15 14:12:08.077425 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 15 14:12:08.077592 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 15 14:12:08.079805 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 14:12:08.079974 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 14:12:08.080140 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 14:12:08.080293 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 15 14:12:08.080474 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 15 14:12:08.080628 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 15 14:12:08.082859 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 15 14:12:08.083025 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 15 14:12:08.083183 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Jan 15 14:12:08.083359 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 15 14:12:08.083526 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 15 14:12:08.083709 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 15 14:12:08.083887 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 15 14:12:08.084059 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 15 14:12:08.084218 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 15 14:12:08.084376 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 15 14:12:08.084566 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 15 14:12:08.086634 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 15 14:12:08.086972 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 15 14:12:08.087236 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 15 14:12:08.087859 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 15 14:12:08.088040 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 15 14:12:08.088211 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 15 14:12:08.088384 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 15 14:12:08.088542 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 15 14:12:08.088726 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 15 14:12:08.090921 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 15 14:12:08.091080 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 15 14:12:08.091244 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 15 14:12:08.091400 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 15 14:12:08.091570 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 15 14:12:08.091592 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 14:12:08.091607 kernel: PCI: CLS 0 bytes, default 64 Jan 15 14:12:08.091621 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 15 14:12:08.091636 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 15 14:12:08.091666 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 15 14:12:08.091680 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 15 14:12:08.091694 kernel: Initialise system trusted keyrings Jan 15 14:12:08.091715 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 15 14:12:08.091729 kernel: Key type asymmetric registered Jan 15 14:12:08.091803 kernel: Asymmetric key parser 'x509' registered Jan 15 14:12:08.091820 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 15 14:12:08.091834 kernel: io scheduler mq-deadline registered Jan 15 14:12:08.091848 kernel: io scheduler kyber registered Jan 15 14:12:08.091861 kernel: io scheduler bfq registered Jan 15 14:12:08.092033 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 15 14:12:08.092202 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 15 14:12:08.092398 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.092568 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 15 14:12:08.092807 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Jan 15 14:12:08.092975 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.093141 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 15 14:12:08.093305 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 15 14:12:08.093477 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.093656 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 15 14:12:08.093858 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 15 14:12:08.094026 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.094191 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 15 14:12:08.094355 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 15 14:12:08.094530 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.094735 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 15 14:12:08.094929 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 15 14:12:08.095094 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.095262 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 15 14:12:08.095426 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 15 14:12:08.095598 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.095808 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 15 14:12:08.095975 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 15 14:12:08.096141 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:12:08.096163 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 14:12:08.096178 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 14:12:08.096200 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 14:12:08.096214 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 14:12:08.096229 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 14:12:08.096242 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 14:12:08.096257 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 14:12:08.096270 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 14:12:08.096284 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 14:12:08.096452 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 15 14:12:08.096621 kernel: rtc_cmos 00:03: registered as rtc0 Jan 15 14:12:08.096845 kernel: rtc_cmos 00:03: setting system clock to 2025-01-15T14:12:07 UTC (1736950327) Jan 15 14:12:08.097001 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 15 14:12:08.097022 kernel: intel_pstate: CPU model not supported Jan 15 14:12:08.097036 kernel: NET: Registered PF_INET6 protocol family Jan 15 14:12:08.097050 kernel: Segment Routing with IPv6 Jan 15 14:12:08.097063 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 
14:12:08.097077 kernel: NET: Registered PF_PACKET protocol family Jan 15 14:12:08.097090 kernel: Key type dns_resolver registered Jan 15 14:12:08.097111 kernel: IPI shorthand broadcast: enabled Jan 15 14:12:08.097126 kernel: sched_clock: Marking stable (1304003677, 235935907)->(1672690685, -132751101) Jan 15 14:12:08.097140 kernel: registered taskstats version 1 Jan 15 14:12:08.097153 kernel: Loading compiled-in X.509 certificates Jan 15 14:12:08.097167 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 15 14:12:08.097181 kernel: Key type .fscrypt registered Jan 15 14:12:08.097194 kernel: Key type fscrypt-provisioning registered Jan 15 14:12:08.097207 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 15 14:12:08.097221 kernel: ima: Allocated hash algorithm: sha1 Jan 15 14:12:08.097240 kernel: ima: No architecture policies found Jan 15 14:12:08.097254 kernel: clk: Disabling unused clocks Jan 15 14:12:08.097267 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 15 14:12:08.097281 kernel: Write protecting the kernel read-only data: 36864k Jan 15 14:12:08.097295 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 15 14:12:08.097308 kernel: Run /init as init process Jan 15 14:12:08.097321 kernel: with arguments: Jan 15 14:12:08.097335 kernel: /init Jan 15 14:12:08.097349 kernel: with environment: Jan 15 14:12:08.097367 kernel: HOME=/ Jan 15 14:12:08.097381 kernel: TERM=linux Jan 15 14:12:08.097394 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 14:12:08.097411 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 14:12:08.097428 systemd[1]: Detected virtualization kvm. Jan 15 14:12:08.097442 systemd[1]: Detected architecture x86-64. Jan 15 14:12:08.097456 systemd[1]: Running in initrd. Jan 15 14:12:08.097476 systemd[1]: No hostname configured, using default hostname. Jan 15 14:12:08.097490 systemd[1]: Hostname set to <localhost>. Jan 15 14:12:08.097505 systemd[1]: Initializing machine ID from VM UUID. Jan 15 14:12:08.097519 systemd[1]: Queued start job for default target initrd.target. Jan 15 14:12:08.097533 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 14:12:08.097548 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 14:12:08.097563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 14:12:08.097578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 14:12:08.097598 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 14:12:08.097613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 14:12:08.097629 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 14:12:08.097658 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 14:12:08.097674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 15 14:12:08.097688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 14:12:08.097703 systemd[1]: Reached target paths.target - Path Units. Jan 15 14:12:08.097724 systemd[1]: Reached target slices.target - Slice Units. Jan 15 14:12:08.097738 systemd[1]: Reached target swap.target - Swaps. Jan 15 14:12:08.097780 systemd[1]: Reached target timers.target - Timer Units. Jan 15 14:12:08.097796 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 14:12:08.097811 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 14:12:08.097825 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 14:12:08.097840 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 15 14:12:08.097855 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 14:12:08.097870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 14:12:08.097891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 14:12:08.097906 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 14:12:08.097921 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 14:12:08.097935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 14:12:08.097950 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 14:12:08.097965 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 14:12:08.097979 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 14:12:08.097994 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 14:12:08.098013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:12:08.098081 systemd-journald[201]: Collecting audit messages is disabled. Jan 15 14:12:08.098116 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 14:12:08.098132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 14:12:08.098153 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 14:12:08.098168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 14:12:08.098184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 14:12:08.098198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 14:12:08.098213 kernel: Bridge firewalling registered Jan 15 14:12:08.098232 systemd-journald[201]: Journal started Jan 15 14:12:08.098258 systemd-journald[201]: Runtime Journal (/run/log/journal/a52a0504ed914fadab745e61fb0ddaea) is 4.7M, max 38.0M, 33.2M free. Jan 15 14:12:08.040527 systemd-modules-load[202]: Inserted module 'overlay' Jan 15 14:12:08.160172 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 14:12:08.090405 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 15 14:12:08.162327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 14:12:08.163367 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:12:08.177959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 14:12:08.181434 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 15 14:12:08.183377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 14:12:08.192981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 14:12:08.205012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 14:12:08.214648 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:12:08.215929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 14:12:08.217880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 14:12:08.224939 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 14:12:08.228916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 14:12:08.247246 dracut-cmdline[234]: dracut-dracut-053 Jan 15 14:12:08.253767 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 14:12:08.281888 systemd-resolved[235]: Positive Trust Anchors: Jan 15 14:12:08.281908 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 14:12:08.281953 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 14:12:08.286206 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 15 14:12:08.287799 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 14:12:08.291834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 14:12:08.369811 kernel: SCSI subsystem initialized Jan 15 14:12:08.381775 kernel: Loading iSCSI transport class v2.0-870. Jan 15 14:12:08.395805 kernel: iscsi: registered transport (tcp) Jan 15 14:12:08.422126 kernel: iscsi: registered transport (qla4xxx) Jan 15 14:12:08.422207 kernel: QLogic iSCSI HBA Driver Jan 15 14:12:08.478389 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 14:12:08.484964 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 14:12:08.519299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 15 14:12:08.519415 kernel: device-mapper: uevent: version 1.0.3 Jan 15 14:12:08.519439 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 14:12:08.568789 kernel: raid6: sse2x4 gen() 13974 MB/s Jan 15 14:12:08.586803 kernel: raid6: sse2x2 gen() 9585 MB/s Jan 15 14:12:08.605463 kernel: raid6: sse2x1 gen() 10303 MB/s Jan 15 14:12:08.605541 kernel: raid6: using algorithm sse2x4 gen() 13974 MB/s Jan 15 14:12:08.624773 kernel: raid6: .... xor() 7839 MB/s, rmw enabled Jan 15 14:12:08.624857 kernel: raid6: using ssse3x2 recovery algorithm Jan 15 14:12:08.650798 kernel: xor: automatically using best checksumming function avx Jan 15 14:12:08.844873 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 14:12:08.862254 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 14:12:08.870227 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 14:12:08.898206 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jan 15 14:12:08.905393 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 14:12:08.917176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 14:12:08.948523 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation Jan 15 14:12:08.990391 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 14:12:08.998073 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 14:12:09.110384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 14:12:09.120599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 14:12:09.157576 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 14:12:09.161859 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 14:12:09.163895 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 14:12:09.166476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 14:12:09.177040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 14:12:09.204456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 14:12:09.239802 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 15 14:12:09.348013 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 14:12:09.348043 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 15 14:12:09.348251 kernel: ACPI: bus type USB registered Jan 15 14:12:09.348274 kernel: usbcore: registered new interface driver usbfs Jan 15 14:12:09.348292 kernel: usbcore: registered new interface driver hub Jan 15 14:12:09.348309 kernel: usbcore: registered new device driver usb Jan 15 14:12:09.348327 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 14:12:09.348345 kernel: GPT:17805311 != 125829119 Jan 15 14:12:09.348369 kernel: AVX version of gcm_enc/dec engaged. Jan 15 14:12:09.348388 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 14:12:09.348406 kernel: libata version 3.00 loaded. Jan 15 14:12:09.348424 kernel: GPT:17805311 != 125829119 Jan 15 14:12:09.348441 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 15 14:12:09.348458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 14:12:09.348476 kernel: AES CTR mode by8 optimization enabled Jan 15 14:12:09.325250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 14:12:09.325438 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:12:09.326487 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 14:12:09.327268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 14:12:09.327468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:12:09.328361 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:12:09.337121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:12:09.358932 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 14:12:09.383372 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 14:12:09.383409 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 15 14:12:09.383658 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 14:12:09.383891 kernel: scsi host0: ahci Jan 15 14:12:09.384126 kernel: scsi host1: ahci Jan 15 14:12:09.384356 kernel: scsi host2: ahci Jan 15 14:12:09.384553 kernel: scsi host3: ahci Jan 15 14:12:09.384823 kernel: scsi host4: ahci Jan 15 14:12:09.385031 kernel: scsi host5: ahci Jan 15 14:12:09.385221 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 15 14:12:09.385251 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 15 14:12:09.385270 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 15 14:12:09.385288 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 15 14:12:09.385306 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 15 14:12:09.385324 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 15 14:12:09.389893 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 14:12:09.397064 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 15 14:12:09.397307 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 15 14:12:09.397517 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 14:12:09.399315 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 15 14:12:09.400096 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 15 14:12:09.400310 kernel: hub 1-0:1.0: USB hub found Jan 15 14:12:09.400546 kernel: hub 1-0:1.0: 4 ports detected Jan 15 14:12:09.402903 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 15 14:12:09.403158 kernel: hub 2-0:1.0: USB hub found Jan 15 14:12:09.403391 kernel: hub 2-0:1.0: 4 ports detected Jan 15 14:12:09.427432 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (471) Jan 15 14:12:09.433783 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469) Jan 15 14:12:09.464074 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 14:12:09.511320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 15 14:12:09.519087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 14:12:09.525451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 14:12:09.526322 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 15 14:12:09.534247 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 14:12:09.541973 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 14:12:09.546934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 14:12:09.552728 disk-uuid[562]: Primary Header is updated. Jan 15 14:12:09.552728 disk-uuid[562]: Secondary Entries is updated. Jan 15 14:12:09.552728 disk-uuid[562]: Secondary Header is updated. Jan 15 14:12:09.555199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 14:12:09.581423 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:12:09.642771 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 15 14:12:09.691821 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 14:12:09.693763 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 15 14:12:09.701764 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 14:12:09.701810 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 14:12:09.704667 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 14:12:09.706496 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 14:12:09.789772 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 14:12:09.797218 kernel: usbcore: registered new interface driver usbhid Jan 15 14:12:09.797282 kernel: usbhid: USB HID core driver Jan 15 14:12:09.804819 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 15 14:12:09.804865 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 15 14:12:10.568242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 14:12:10.569167 disk-uuid[563]: The operation has completed successfully. Jan 15 14:12:10.625553 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 14:12:10.625789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 14:12:10.644996 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 14:12:10.651415 sh[591]: Success Jan 15 14:12:10.670028 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 15 14:12:10.737077 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 14:12:10.742193 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 14:12:10.745281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 15 14:12:10.771996 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 15 14:12:10.772066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 14:12:10.772087 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 14:12:10.774441 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 14:12:10.777728 kernel: BTRFS info (device dm-0): using free space tree Jan 15 14:12:10.788486 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 14:12:10.789956 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 14:12:10.795984 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 14:12:10.799929 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 14:12:10.819143 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:12:10.819221 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 14:12:10.819243 kernel: BTRFS info (device vda6): using free space tree Jan 15 14:12:10.823877 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 14:12:10.839446 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 14:12:10.840874 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:12:10.848284 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 14:12:10.855979 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 14:12:10.939395 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 14:12:10.951058 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 14:12:10.991119 systemd-networkd[774]: lo: Link UP Jan 15 14:12:10.991133 systemd-networkd[774]: lo: Gained carrier Jan 15 14:12:10.993444 systemd-networkd[774]: Enumeration completed Jan 15 14:12:10.993991 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 14:12:10.993997 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 14:12:10.995241 systemd-networkd[774]: eth0: Link UP Jan 15 14:12:10.995247 systemd-networkd[774]: eth0: Gained carrier Jan 15 14:12:10.995258 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 14:12:11.001429 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 14:12:11.003998 systemd[1]: Reached target network.target - Network. Jan 15 14:12:11.014097 ignition[690]: Ignition 2.19.0 Jan 15 14:12:11.014118 ignition[690]: Stage: fetch-offline Jan 15 14:12:11.014217 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:11.017374 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 15 14:12:11.014243 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:11.014466 ignition[690]: parsed url from cmdline: "" Jan 15 14:12:11.014473 ignition[690]: no config URL provided Jan 15 14:12:11.014483 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 14:12:11.014499 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jan 15 14:12:11.014508 ignition[690]: failed to fetch config: resource requires networking Jan 15 14:12:11.014961 ignition[690]: Ignition finished successfully Jan 15 14:12:11.028144 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 15 14:12:11.049936 systemd-networkd[774]: eth0: DHCPv4 address 10.244.21.14/30, gateway 10.244.21.13 acquired from 10.244.21.13 Jan 15 14:12:11.053111 ignition[781]: Ignition 2.19.0 Jan 15 14:12:11.054204 ignition[781]: Stage: fetch Jan 15 14:12:11.055268 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:11.055290 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:11.055456 ignition[781]: parsed url from cmdline: "" Jan 15 14:12:11.055462 ignition[781]: no config URL provided Jan 15 14:12:11.055472 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 14:12:11.055489 ignition[781]: no config at "/usr/lib/ignition/user.ign" Jan 15 14:12:11.055724 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 15 14:12:11.056447 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 15 14:12:11.056486 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 15 14:12:11.071964 ignition[781]: GET result: OK Jan 15 14:12:11.072650 ignition[781]: parsing config with SHA512: a25ae9540748a5469d341dc62796490102bae71415c83efec3130525756df8bd353ac25cfa572e13bab23ec7d05a829a6d10c356bad463bea711ced806a71666 Jan 15 14:12:11.078660 unknown[781]: fetched base config from "system" Jan 15 14:12:11.079649 unknown[781]: fetched base config from "system" Jan 15 14:12:11.080447 unknown[781]: fetched user config from "openstack" Jan 15 14:12:11.081689 ignition[781]: fetch: fetch complete Jan 15 14:12:11.081698 ignition[781]: fetch: fetch passed Jan 15 14:12:11.083208 ignition[781]: Ignition finished successfully Jan 15 14:12:11.085695 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 14:12:11.097003 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 14:12:11.128058 ignition[789]: Ignition 2.19.0 Jan 15 14:12:11.128077 ignition[789]: Stage: kargs Jan 15 14:12:11.128357 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:11.128378 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:11.131176 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 14:12:11.129676 ignition[789]: kargs: kargs passed Jan 15 14:12:11.129772 ignition[789]: Ignition finished successfully Jan 15 14:12:11.153250 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 14:12:11.171184 ignition[795]: Ignition 2.19.0 Jan 15 14:12:11.171207 ignition[795]: Stage: disks Jan 15 14:12:11.171477 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:11.174170 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 15 14:12:11.171498 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:11.176192 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 14:12:11.172958 ignition[795]: disks: disks passed Jan 15 14:12:11.177088 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 14:12:11.173037 ignition[795]: Ignition finished successfully Jan 15 14:12:11.178934 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 14:12:11.180737 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 14:12:11.182261 systemd[1]: Reached target basic.target - Basic System. Jan 15 14:12:11.197170 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 14:12:11.217679 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 15 14:12:11.221557 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 14:12:11.228297 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 14:12:11.352090 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 15 14:12:11.353710 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 14:12:11.356781 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 14:12:11.370098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 14:12:11.375654 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 14:12:11.377356 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 14:12:11.381148 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 15 14:12:11.382601 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 14:12:11.382656 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 14:12:11.394310 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 14:12:11.403335 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 15 14:12:11.406060 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 14:12:11.418292 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:12:11.418382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 14:12:11.418404 kernel: BTRFS info (device vda6): using free space tree Jan 15 14:12:11.429902 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 14:12:11.445258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 14:12:11.523713 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 14:12:11.530794 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 15 14:12:11.538986 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 14:12:11.549033 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 14:12:11.666926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 14:12:11.673931 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 15 14:12:11.678987 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 14:12:11.691830 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:12:11.721478 ignition[930]: INFO : Ignition 2.19.0 Jan 15 14:12:11.722821 ignition[930]: INFO : Stage: mount Jan 15 14:12:11.723521 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:11.723521 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:11.726185 ignition[930]: INFO : mount: mount passed Jan 15 14:12:11.726185 ignition[930]: INFO : Ignition finished successfully Jan 15 14:12:11.726759 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 14:12:11.728975 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 14:12:11.769041 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 14:12:12.716665 systemd-networkd[774]: eth0: Gained IPv6LL Jan 15 14:12:13.143355 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:543:24:19ff:fef4:150e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:543:24:19ff:fef4:150e/64 assigned by NDisc. Jan 15 14:12:13.143381 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 14:12:18.620884 coreos-metadata[813]: Jan 15 14:12:18.620 WARN failed to locate config-drive, using the metadata service API instead Jan 15 14:12:18.631432 coreos-metadata[813]: Jan 15 14:12:18.631 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 14:12:18.649311 coreos-metadata[813]: Jan 15 14:12:18.647 INFO Fetch successful Jan 15 14:12:18.650222 coreos-metadata[813]: Jan 15 14:12:18.649 INFO wrote hostname srv-8ino3.gb1.brightbox.com to /sysroot/etc/hostname Jan 15 14:12:18.651352 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 15 14:12:18.651556 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 15 14:12:18.671836 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 14:12:18.695107 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 14:12:18.729772 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (946) Jan 15 14:12:18.729867 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:12:18.733089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 14:12:18.733172 kernel: BTRFS info (device vda6): using free space tree Jan 15 14:12:18.739802 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 14:12:18.743489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 14:12:18.788452 ignition[964]: INFO : Ignition 2.19.0 Jan 15 14:12:18.788452 ignition[964]: INFO : Stage: files Jan 15 14:12:18.790645 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:18.790645 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:18.790645 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jan 15 14:12:18.793686 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 14:12:18.793686 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 14:12:18.796111 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 14:12:18.797190 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 14:12:18.797190 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 14:12:18.796953 unknown[964]: wrote ssh authorized keys file for user: core Jan 15 14:12:18.800417 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 15 14:12:18.800417 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 15 14:12:19.014975 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 14:12:19.675544 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 14:12:19.677182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 14:12:19.693370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 14:12:19.693370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 14:12:19.693370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 15 14:12:19.693370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 15 14:12:19.693370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 15 14:12:19.693370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 15 14:12:20.441593 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 14:12:25.116569 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 15 14:12:25.116569 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 14:12:25.122159 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 14:12:25.128190 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 14:12:25.128190 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 14:12:25.128190 ignition[964]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 15 14:12:25.132775 ignition[964]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 14:12:25.132775 ignition[964]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 14:12:25.132775 ignition[964]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 14:12:25.132775 ignition[964]: INFO : files: files passed Jan 15 14:12:25.132775 ignition[964]: INFO : Ignition finished successfully Jan 15 14:12:25.132313 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 14:12:25.155374 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 14:12:25.158977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 14:12:25.167372 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 14:12:25.175083 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 14:12:25.191324 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 14:12:25.191324 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 14:12:25.194558 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 14:12:25.193937 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 14:12:25.196227 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 14:12:25.203144 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 14:12:25.248821 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 14:12:25.249013 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 14:12:25.250979 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 14:12:25.252516 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 14:12:25.254271 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 14:12:25.261144 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 14:12:25.281513 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 14:12:25.296161 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 14:12:25.309785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 14:12:25.311767 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 14:12:25.313932 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 14:12:25.314711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 14:12:25.314925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 14:12:25.317712 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 14:12:25.320554 systemd[1]: Stopped target basic.target - Basic System. Jan 15 14:12:25.322204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 14:12:25.325995 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 14:12:25.327082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 14:12:25.328102 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 14:12:25.330932 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 14:12:25.331925 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 14:12:25.332835 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 14:12:25.333723 systemd[1]: Stopped target swap.target - Swaps. Jan 15 14:12:25.334480 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 14:12:25.334783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 14:12:25.335951 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 14:12:25.337157 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 14:12:25.338088 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 14:12:25.338289 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 14:12:25.340879 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 14:12:25.341103 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 14:12:25.342891 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 14:12:25.343090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 14:12:25.344719 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 14:12:25.344909 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 14:12:25.355175 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 14:12:25.361932 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 14:12:25.362665 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 14:12:25.362889 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 14:12:25.370856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 15 14:12:25.371128 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 14:12:25.381270 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 14:12:25.383563 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 14:12:25.385911 ignition[1016]: INFO : Ignition 2.19.0 Jan 15 14:12:25.385911 ignition[1016]: INFO : Stage: umount Jan 15 14:12:25.385911 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 14:12:25.385911 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:12:25.397739 ignition[1016]: INFO : umount: umount passed Jan 15 14:12:25.397739 ignition[1016]: INFO : Ignition finished successfully Jan 15 14:12:25.393828 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 14:12:25.394045 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 14:12:25.396251 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 14:12:25.396681 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 14:12:25.400734 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 14:12:25.400924 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 14:12:25.401661 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 14:12:25.401730 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 14:12:25.402475 systemd[1]: Stopped target network.target - Network. Jan 15 14:12:25.404873 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 14:12:25.404982 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 14:12:25.406077 systemd[1]: Stopped target paths.target - Path Units. Jan 15 14:12:25.406706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 14:12:25.410909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 14:12:25.411762 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 14:12:25.412402 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 14:12:25.414931 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 14:12:25.415026 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 14:12:25.416768 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 14:12:25.416843 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 14:12:25.418386 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 14:12:25.418471 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 14:12:25.421973 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 14:12:25.422074 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 14:12:25.423233 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 14:12:25.424307 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 14:12:25.428861 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 14:12:25.429842 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 14:12:25.430972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 14:12:25.431150 systemd-networkd[774]: eth0: DHCPv6 lease lost Jan 15 14:12:25.434316 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 14:12:25.434527 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 15 14:12:25.438459 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 14:12:25.438758 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 14:12:25.443293 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 14:12:25.443603 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 14:12:25.444515 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 14:12:25.444599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 14:12:25.451962 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 14:12:25.452821 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 14:12:25.452929 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 14:12:25.454473 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 14:12:25.454553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 14:12:25.457286 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 14:12:25.457397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 14:12:25.458932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 14:12:25.459007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 14:12:25.462945 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 14:12:25.476263 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 14:12:25.476451 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 14:12:25.479237 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 14:12:25.479529 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 14:12:25.481535 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 14:12:25.481614 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 14:12:25.482632 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 14:12:25.482688 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 14:12:25.484346 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 14:12:25.484443 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 14:12:25.486993 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 14:12:25.487072 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 14:12:25.487891 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 14:12:25.487969 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:12:25.499979 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 14:12:25.500797 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 14:12:25.500895 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 14:12:25.504201 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 14:12:25.504308 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 14:12:25.507056 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 15 14:12:25.507159 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 14:12:25.512728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 14:12:25.512869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:12:25.515482 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 14:12:25.515674 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 14:12:25.517344 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 14:12:25.528033 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 14:12:25.540844 systemd[1]: Switching root. Jan 15 14:12:25.578800 systemd-journald[201]: Journal stopped Jan 15 14:12:27.130904 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 15 14:12:27.131039 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 14:12:27.131073 kernel: SELinux: policy capability open_perms=1 Jan 15 14:12:27.131094 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 14:12:27.131113 kernel: SELinux: policy capability always_check_network=0 Jan 15 14:12:27.131137 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 14:12:27.131175 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 14:12:27.131195 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 14:12:27.131214 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 14:12:27.131244 kernel: audit: type=1403 audit(1736950345.827:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 14:12:27.131271 systemd[1]: Successfully loaded SELinux policy in 50.522ms. Jan 15 14:12:27.131308 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.409ms. Jan 15 14:12:27.131350 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 14:12:27.131374 systemd[1]: Detected virtualization kvm. Jan 15 14:12:27.131395 systemd[1]: Detected architecture x86-64. Jan 15 14:12:27.131432 systemd[1]: Detected first boot. Jan 15 14:12:27.131455 systemd[1]: Hostname set to . Jan 15 14:12:27.131490 systemd[1]: Initializing machine ID from VM UUID. Jan 15 14:12:27.131514 zram_generator::config[1059]: No configuration found. Jan 15 14:12:27.131537 systemd[1]: Populated /etc with preset unit settings. Jan 15 14:12:27.131557 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 14:12:27.131577 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 14:12:27.131598 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 14:12:27.131633 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 14:12:27.131655 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 14:12:27.132396 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 14:12:27.132429 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 14:12:27.132459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 14:12:27.132481 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 15 14:12:27.132501 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 14:12:27.132522 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 14:12:27.132559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 14:12:27.132589 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 14:12:27.132610 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 14:12:27.132631 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 14:12:27.132651 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 14:12:27.132673 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 14:12:27.132699 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 14:12:27.132720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 14:12:27.132808 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 14:12:27.132849 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 14:12:27.132873 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 14:12:27.132900 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 14:12:27.132927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 14:12:27.132949 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 14:12:27.132970 systemd[1]: Reached target slices.target - Slice Units. Jan 15 14:12:27.132989 systemd[1]: Reached target swap.target - Swaps. Jan 15 14:12:27.133024 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 14:12:27.133045 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 14:12:27.133065 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 14:12:27.133085 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 14:12:27.133118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 14:12:27.133159 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 14:12:27.133193 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 14:12:27.133216 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 14:12:27.133237 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 14:12:27.133266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:27.133288 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 14:12:27.133309 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 14:12:27.133340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 14:12:27.134812 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 14:12:27.134852 systemd[1]: Reached target machines.target - Containers. 
Jan 15 14:12:27.134875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 14:12:27.134895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 14:12:27.134915 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 14:12:27.134936 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 14:12:27.134956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 14:12:27.134984 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 14:12:27.135005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 14:12:27.135039 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 14:12:27.135068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 14:12:27.135091 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 14:12:27.135111 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 14:12:27.135132 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 14:12:27.135151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 14:12:27.135171 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 14:12:27.135191 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 14:12:27.135211 kernel: loop: module loaded Jan 15 14:12:27.135244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 14:12:27.135267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 14:12:27.135287 kernel: ACPI: bus type drm_connector registered Jan 15 14:12:27.135307 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 14:12:27.135327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 14:12:27.135366 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 14:12:27.135389 systemd[1]: Stopped verity-setup.service. Jan 15 14:12:27.135410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:27.135430 kernel: fuse: init (API version 7.39) Jan 15 14:12:27.135466 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 14:12:27.135495 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 14:12:27.135517 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 14:12:27.135538 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 14:12:27.135559 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 14:12:27.135592 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 14:12:27.135615 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 14:12:27.135636 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 14:12:27.135656 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 14:12:27.135676 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 14:12:27.135696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 15 14:12:27.135730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 14:12:27.135766 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 14:12:27.135836 systemd-journald[1155]: Collecting audit messages is disabled. Jan 15 14:12:27.135889 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 14:12:27.135914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 14:12:27.135941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 14:12:27.135963 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 14:12:27.135998 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 14:12:27.136033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 14:12:27.136055 systemd-journald[1155]: Journal started Jan 15 14:12:27.136094 systemd-journald[1155]: Runtime Journal (/run/log/journal/a52a0504ed914fadab745e61fb0ddaea) is 4.7M, max 38.0M, 33.2M free. Jan 15 14:12:27.137797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 14:12:26.669654 systemd[1]: Queued start job for default target multi-user.target. Jan 15 14:12:26.692412 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 14:12:26.693177 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 14:12:27.141829 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 14:12:27.142437 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 14:12:27.143914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 14:12:27.145119 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 14:12:27.163082 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 14:12:27.177870 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 14:12:27.188004 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 14:12:27.190944 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 14:12:27.191024 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 14:12:27.195481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 15 14:12:27.204108 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 14:12:27.212600 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 14:12:27.214224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 14:12:27.225007 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 14:12:27.237302 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 14:12:27.238852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 14:12:27.242172 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 14:12:27.248730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 15 14:12:27.249883 systemd-journald[1155]: Time spent on flushing to /var/log/journal/a52a0504ed914fadab745e61fb0ddaea is 91.316ms for 1133 entries. Jan 15 14:12:27.249883 systemd-journald[1155]: System Journal (/var/log/journal/a52a0504ed914fadab745e61fb0ddaea) is 8.0M, max 584.8M, 576.8M free. Jan 15 14:12:27.382983 systemd-journald[1155]: Received client request to flush runtime journal. Jan 15 14:12:27.383059 kernel: loop0: detected capacity change from 0 to 8 Jan 15 14:12:27.383087 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 14:12:27.258384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 14:12:27.267340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 14:12:27.271768 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 14:12:27.276356 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 14:12:27.278195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 14:12:27.280005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 14:12:27.320294 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 14:12:27.321362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 14:12:27.329054 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 15 14:12:27.389120 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 14:12:27.399770 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 14:12:27.409406 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 14:12:27.424002 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 15 14:12:27.427092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 14:12:27.430003 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 15 14:12:27.442832 kernel: loop1: detected capacity change from 0 to 140768 Jan 15 14:12:27.449835 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 15 14:12:27.449857 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 15 14:12:27.458058 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 14:12:27.472250 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 14:12:27.486066 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 15 14:12:27.535767 kernel: loop2: detected capacity change from 0 to 205544 Jan 15 14:12:27.566538 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 14:12:27.577689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 14:12:27.618794 kernel: loop3: detected capacity change from 0 to 142488 Jan 15 14:12:27.644257 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 15 14:12:27.644977 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 15 14:12:27.673095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 15 14:12:27.691782 kernel: loop4: detected capacity change from 0 to 8 Jan 15 14:12:27.695774 kernel: loop5: detected capacity change from 0 to 140768 Jan 15 14:12:27.739782 kernel: loop6: detected capacity change from 0 to 205544 Jan 15 14:12:27.784723 kernel: loop7: detected capacity change from 0 to 142488 Jan 15 14:12:27.814171 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 15 14:12:27.815022 (sd-merge)[1220]: Merged extensions into '/usr'. Jan 15 14:12:27.826373 systemd[1]: Reloading requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 14:12:27.826597 systemd[1]: Reloading... Jan 15 14:12:27.950297 zram_generator::config[1244]: No configuration found. Jan 15 14:12:28.226876 ldconfig[1187]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 14:12:28.240501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 14:12:28.308505 systemd[1]: Reloading finished in 481 ms. Jan 15 14:12:28.335013 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 14:12:28.339467 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 14:12:28.341091 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 14:12:28.363283 systemd[1]: Starting ensure-sysext.service... Jan 15 14:12:28.368949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 14:12:28.377001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 14:12:28.382952 systemd[1]: Reloading requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)... Jan 15 14:12:28.382992 systemd[1]: Reloading... Jan 15 14:12:28.434987 systemd-udevd[1306]: Using default interface naming scheme 'v255'. Jan 15 14:12:28.436895 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 14:12:28.437500 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 15 14:12:28.442043 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 14:12:28.442486 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jan 15 14:12:28.442610 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jan 15 14:12:28.464496 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 14:12:28.464516 systemd-tmpfiles[1305]: Skipping /boot Jan 15 14:12:28.489818 zram_generator::config[1335]: No configuration found. Jan 15 14:12:28.501826 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 14:12:28.501847 systemd-tmpfiles[1305]: Skipping /boot Jan 15 14:12:28.735692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 15 14:12:28.757774 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1342) Jan 15 14:12:28.810775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 14:12:28.822550 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 14:12:28.822673 systemd[1]: Reloading finished in 439 ms. Jan 15 14:12:28.836328 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 14:12:28.843648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 14:12:28.845098 kernel: ACPI: button: Power Button [PWRF] Jan 15 14:12:28.849563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 14:12:28.908177 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 14:12:28.915100 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 14:12:28.925162 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 14:12:28.932565 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 14:12:28.941113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 14:12:28.950101 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 14:12:28.956010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:28.956284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 14:12:28.964185 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 14:12:28.968080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 14:12:28.980169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 14:12:28.982023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 14:12:28.982853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:28.989407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:28.989678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 14:12:28.990979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 14:12:28.991120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:28.996435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 14:12:28.996790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 14:12:29.000591 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 15 14:12:29.009771 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 14:12:29.017079 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 15 14:12:29.052166 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 14:12:29.011233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 14:12:29.014888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:29.015207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 14:12:29.025195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 14:12:29.029908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 14:12:29.031706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 14:12:29.041806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 14:12:29.044864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 14:12:29.046292 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 14:12:29.049106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 14:12:29.049326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 14:12:29.062527 systemd[1]: Finished ensure-sysext.service. Jan 15 14:12:29.082051 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 14:12:29.090006 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 14:12:29.091143 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 14:12:29.093090 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 14:12:29.095234 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 14:12:29.101771 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 15 14:12:29.115980 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 14:12:29.132366 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 14:12:29.133033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 14:12:29.135095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 14:12:29.175372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 14:12:29.184828 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 14:12:29.185865 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 14:12:29.188292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 14:12:29.189854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 15 14:12:29.194723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 14:12:29.208508 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 14:12:29.224376 augenrules[1455]: No rules Jan 15 14:12:29.226330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 14:12:29.236179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:12:29.267143 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 14:12:29.492131 systemd-networkd[1415]: lo: Link UP Jan 15 14:12:29.492145 systemd-networkd[1415]: lo: Gained carrier Jan 15 14:12:29.496811 systemd-networkd[1415]: Enumeration completed Jan 15 14:12:29.497010 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 14:12:29.497441 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 14:12:29.497447 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 14:12:29.500919 systemd-networkd[1415]: eth0: Link UP Jan 15 14:12:29.500938 systemd-networkd[1415]: eth0: Gained carrier Jan 15 14:12:29.500963 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 14:12:29.529873 systemd-networkd[1415]: eth0: DHCPv4 address 10.244.21.14/30, gateway 10.244.21.13 acquired from 10.244.21.13 Jan 15 14:12:29.532873 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Jan 15 14:12:29.534481 systemd-resolved[1416]: Positive Trust Anchors: Jan 15 14:12:29.535801 systemd-resolved[1416]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 14:12:29.535854 systemd-resolved[1416]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 14:12:29.549619 systemd-resolved[1416]: Using system hostname 'srv-8ino3.gb1.brightbox.com'. Jan 15 14:12:29.572241 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 14:12:29.573357 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 14:12:29.574616 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 15 14:12:29.575912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:12:29.578529 systemd[1]: Reached target network.target - Network. Jan 15 14:12:29.579251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 14:12:29.580395 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 14:12:29.587069 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 15 14:12:29.591960 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
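The DHCPv4 lease logged above (10.244.21.14/30, gateway 10.244.21.13) is a point-to-point-sized subnet: a /30 leaves exactly two usable host addresses, one for the instance and one for the gateway. A standard-library check using only the values from the lease line:

import ipaddress

# Lease parameters as logged by systemd-networkd above.
iface = ipaddress.ip_interface("10.244.21.14/30")
net = iface.network

print(net)                    # 10.244.21.12/30
print(list(net.hosts()))      # [10.244.21.13, 10.244.21.14] - gateway, instance
print(net.broadcast_address)  # 10.244.21.15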
Jan 15 14:12:29.610772 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 14:12:29.649437 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 15 14:12:29.650737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 14:12:29.651598 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 14:12:29.652534 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 14:12:29.653598 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 14:12:29.654718 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 14:12:29.655691 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 14:12:29.656503 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 14:12:29.657310 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 14:12:29.657380 systemd[1]: Reached target paths.target - Path Units. Jan 15 14:12:29.658060 systemd[1]: Reached target timers.target - Timer Units. Jan 15 14:12:30.199229 systemd-resolved[1416]: Clock change detected. Flushing caches. Jan 15 14:12:30.199232 systemd-timesyncd[1434]: Contacted time server 185.83.169.27:123 (0.flatcar.pool.ntp.org). Jan 15 14:12:30.199367 systemd-timesyncd[1434]: Initial clock synchronization to Wed 2025-01-15 14:12:30.199076 UTC. Jan 15 14:12:30.199382 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 14:12:30.202477 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 14:12:30.208723 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 14:12:30.211581 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 15 14:12:30.213173 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 14:12:30.214343 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 14:12:30.215062 systemd[1]: Reached target basic.target - Basic System. Jan 15 14:12:30.215800 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 14:12:30.215863 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 14:12:30.228061 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 14:12:30.231376 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 14:12:30.234189 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 14:12:30.236246 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 14:12:30.239138 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 14:12:30.264234 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 14:12:30.265112 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 14:12:30.267682 jq[1485]: false Jan 15 14:12:30.270841 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 14:12:30.281223 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
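The out-of-order timestamps in this stretch are the visible effect of the 'Clock change detected. Flushing caches.' entry: systemd-timesyncd stepped the clock at first synchronization, and journal stamps written before and after the step straddle the jump. The two adjacent entries ('Reached target timers.target' at 14:12:29.658060 and the resolved cache flush at 14:12:30.199229) bound the step at roughly half a second; this is only an estimate, since a little real time also elapsed between the entries:

from datetime import datetime

fmt = "%H:%M:%S.%f"
before = datetime.strptime("14:12:29.658060", fmt)  # last pre-step entry
after  = datetime.strptime("14:12:30.199229", fmt)  # first post-step entry

step = (after - before).total_seconds()
print(f"clock stepped forward by roughly {step:.3f} s")  # ~0.541 s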
Jan 15 14:12:30.289265 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 14:12:30.293684 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 14:12:30.305234 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 14:12:30.306841 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 14:12:30.309171 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 14:12:30.313227 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 14:12:30.321215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 14:12:30.324059 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 15 14:12:30.331501 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 14:12:30.331794 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 14:12:30.342681 dbus-daemon[1484]: [system] SELinux support is enabled Jan 15 14:12:30.343180 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 14:12:30.352278 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 14:12:30.352336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 14:12:30.354241 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 14:12:30.354270 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 14:12:30.355215 dbus-daemon[1484]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1415 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 15 14:12:30.356042 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 14:12:30.369234 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 15 14:12:30.370777 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 14:12:30.372109 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 15 14:12:30.393341 tar[1498]: linux-amd64/helm Jan 15 14:12:30.425629 extend-filesystems[1486]: Found loop4 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found loop5 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found loop6 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found loop7 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda1 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda2 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda3 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found usr Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda4 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda6 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda7 Jan 15 14:12:30.435753 extend-filesystems[1486]: Found vda9 Jan 15 14:12:30.435753 extend-filesystems[1486]: Checking size of /dev/vda9 Jan 15 14:12:30.428410 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 14:12:30.499545 update_engine[1495]: I20250115 14:12:30.477351 1495 main.cc:92] Flatcar Update Engine starting Jan 15 14:12:30.508509 jq[1496]: true Jan 15 14:12:30.428444 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 14:12:30.432190 systemd-logind[1493]: New seat seat0. Jan 15 14:12:30.517362 update_engine[1495]: I20250115 14:12:30.512655 1495 update_check_scheduler.cc:74] Next update check in 7m52s Jan 15 14:12:30.434196 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 14:12:30.517730 jq[1517]: true Jan 15 14:12:30.475397 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 14:12:30.530428 extend-filesystems[1486]: Resized partition /dev/vda9 Jan 15 14:12:30.478662 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 14:12:30.534261 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024) Jan 15 14:12:30.561483 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 15 14:12:30.478990 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 14:12:30.498879 systemd[1]: Started update-engine.service - Update Engine. Jan 15 14:12:30.509428 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 14:12:30.630440 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1349) Jan 15 14:12:30.759706 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 15 14:12:30.759955 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 15 14:12:30.763210 dbus-daemon[1484]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1503 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 15 14:12:30.775718 systemd[1]: Starting polkit.service - Authorization Manager... 
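The extend-filesystems/resize2fs job queued above grows the root partition /dev/vda9 online from 1617920 to 15121403 blocks; at the 4k block size the EXT4-fs kernel messages report, that takes the filesystem from about 6.2 GiB to about 57.7 GiB. The arithmetic, using the block counts exactly as logged:

BLOCK_SIZE = 4096  # "(4k)" per the EXT4-fs kernel messages

old_blocks = 1_617_920
new_blocks = 15_121_403

GiB = 1024 ** 3
print(f"before: {old_blocks * BLOCK_SIZE / GiB:.2f} GiB")  # ~6.17 GiB
print(f"after:  {new_blocks * BLOCK_SIZE / GiB:.2f} GiB")  # ~57.68 GiB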
Jan 15 14:12:30.787275 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 14:12:30.812397 polkitd[1548]: Started polkitd version 121 Jan 15 14:12:30.825161 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d Jan 15 14:12:30.825284 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 15 14:12:30.827257 polkitd[1548]: Finished loading, compiling and executing 2 rules Jan 15 14:12:30.829224 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 15 14:12:30.829766 systemd[1]: Started polkit.service - Authorization Manager. Jan 15 14:12:30.831105 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 15 14:12:30.832165 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Jan 15 14:12:30.833493 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 14:12:30.846750 systemd[1]: Starting sshkeys.service... Jan 15 14:12:30.866170 systemd-hostnamed[1503]: Hostname set to (static) Jan 15 14:12:30.940688 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 15 14:12:30.950587 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 15 14:12:30.971173 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 15 14:12:30.999340 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 14:12:30.999340 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 15 14:12:30.999340 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 15 14:12:31.013580 extend-filesystems[1486]: Resized filesystem in /dev/vda9 Jan 15 14:12:31.003832 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 14:12:31.017564 containerd[1513]: time="2025-01-15T14:12:30.999812762Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 14:12:31.004113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 14:12:31.037238 containerd[1513]: time="2025-01-15T14:12:31.036263667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.038838804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.038883313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.038916203Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039220616Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039263908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039370616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039399780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039685201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039718254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039745581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:12:31.039789 containerd[1513]: time="2025-01-15T14:12:31.039764469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 14:12:31.040224 containerd[1513]: time="2025-01-15T14:12:31.039891473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:12:31.040916 containerd[1513]: time="2025-01-15T14:12:31.040345673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:12:31.040916 containerd[1513]: time="2025-01-15T14:12:31.040601120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:12:31.040916 containerd[1513]: time="2025-01-15T14:12:31.040627591Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 14:12:31.040916 containerd[1513]: time="2025-01-15T14:12:31.040755896Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 14:12:31.040916 containerd[1513]: time="2025-01-15T14:12:31.040854346Z" level=info msg="metadata content store policy set" policy=shared Jan 15 14:12:31.046534 containerd[1513]: time="2025-01-15T14:12:31.046115050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 14:12:31.046534 containerd[1513]: time="2025-01-15T14:12:31.046192838Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 15 14:12:31.046534 containerd[1513]: time="2025-01-15T14:12:31.046220778Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 14:12:31.046534 containerd[1513]: time="2025-01-15T14:12:31.046244146Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 14:12:31.046534 containerd[1513]: time="2025-01-15T14:12:31.046273369Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 15 14:12:31.046534 containerd[1513]: time="2025-01-15T14:12:31.046445126Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 15 14:12:31.046770 containerd[1513]: time="2025-01-15T14:12:31.046739041Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 14:12:31.047219 containerd[1513]: time="2025-01-15T14:12:31.046914373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 14:12:31.047219 containerd[1513]: time="2025-01-15T14:12:31.046946576Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 14:12:31.048038 containerd[1513]: time="2025-01-15T14:12:31.046967668Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 15 14:12:31.048094 containerd[1513]: time="2025-01-15T14:12:31.048045064Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048094 containerd[1513]: time="2025-01-15T14:12:31.048071566Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048094 containerd[1513]: time="2025-01-15T14:12:31.048090984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048220 containerd[1513]: time="2025-01-15T14:12:31.048111082Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048220 containerd[1513]: time="2025-01-15T14:12:31.048131769Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048220 containerd[1513]: time="2025-01-15T14:12:31.048151099Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048220 containerd[1513]: time="2025-01-15T14:12:31.048169776Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048220 containerd[1513]: time="2025-01-15T14:12:31.048189148Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048225622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048250386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048277863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048299578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048319959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048340045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 15 14:12:31.048362 containerd[1513]: time="2025-01-15T14:12:31.048357776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048376228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048396330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048418009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048449350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048472134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048492640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048532879Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048571427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048622 containerd[1513]: time="2025-01-15T14:12:31.048592925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.048894 containerd[1513]: time="2025-01-15T14:12:31.048627631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 14:12:31.048894 containerd[1513]: time="2025-01-15T14:12:31.048713887Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049094048Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049126507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049158409Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049176838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049209111Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049237584Z" level=info msg="NRI interface is disabled by configuration." Jan 15 14:12:31.049997 containerd[1513]: time="2025-01-15T14:12:31.049257259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 15 14:12:31.050263 containerd[1513]: time="2025-01-15T14:12:31.049640341Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 14:12:31.050263 containerd[1513]: time="2025-01-15T14:12:31.049725801Z" level=info msg="Connect containerd service" Jan 15 14:12:31.050263 containerd[1513]: time="2025-01-15T14:12:31.049781777Z" level=info msg="using legacy CRI server" Jan 15 14:12:31.050263 containerd[1513]: time="2025-01-15T14:12:31.049798132Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 14:12:31.052483 containerd[1513]: time="2025-01-15T14:12:31.049966921Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 14:12:31.053283 containerd[1513]: time="2025-01-15T14:12:31.053248152Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 14:12:31.055157 
containerd[1513]: time="2025-01-15T14:12:31.055100935Z" level=info msg="Start subscribing containerd event" Jan 15 14:12:31.055216 containerd[1513]: time="2025-01-15T14:12:31.055180769Z" level=info msg="Start recovering state" Jan 15 14:12:31.055495 containerd[1513]: time="2025-01-15T14:12:31.055299042Z" level=info msg="Start event monitor" Jan 15 14:12:31.055495 containerd[1513]: time="2025-01-15T14:12:31.055345174Z" level=info msg="Start snapshots syncer" Jan 15 14:12:31.055495 containerd[1513]: time="2025-01-15T14:12:31.055368164Z" level=info msg="Start cni network conf syncer for default" Jan 15 14:12:31.055495 containerd[1513]: time="2025-01-15T14:12:31.055383632Z" level=info msg="Start streaming server" Jan 15 14:12:31.058861 containerd[1513]: time="2025-01-15T14:12:31.056270774Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 14:12:31.058861 containerd[1513]: time="2025-01-15T14:12:31.056359662Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 14:12:31.058861 containerd[1513]: time="2025-01-15T14:12:31.056476211Z" level=info msg="containerd successfully booted in 0.059245s" Jan 15 14:12:31.056659 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 14:12:31.175523 systemd-networkd[1415]: eth0: Gained IPv6LL Jan 15 14:12:31.181196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 14:12:31.185252 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 14:12:31.195368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:12:31.202372 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 14:12:31.267870 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 14:12:31.542285 tar[1498]: linux-amd64/LICENSE Jan 15 14:12:31.542830 tar[1498]: linux-amd64/README.md Jan 15 14:12:31.572584 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 14:12:31.592298 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 14:12:31.632207 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 14:12:31.645350 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 14:12:31.655081 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 14:12:31.655452 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 14:12:31.667401 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 14:12:31.682804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 14:12:31.696708 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 14:12:31.704731 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 14:12:31.706067 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 14:12:32.222890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:12:32.241231 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 14:12:32.678213 systemd-networkd[1415]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:543:24:19ff:fef4:150e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:543:24:19ff:fef4:150e/64 assigned by NDisc. 
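The huge 'Start cri plugin with config' entry above is containerd's CRI plugin dumping its effective configuration as one Go-struct string; the operative settings on this node are runc via io.containerd.runc.v2 with SystemdCgroup:true, pause image registry.k8s.io/pause:3.8, and CNI configs expected under /etc/cni/net.d, which is why the 'failed to load cni during init' error is expected until a CNI plugin writes a config there. A few fields can be fished out of such a line with a rough regex; the field names below are copied from the log, but this parsing is only a sketch, not any supported containerd interface (note the full dump contains SystemdCgroup twice, once in the runc options and once as a legacy top-level field, so a real parser would have to disambiguate):

import re

# Shortened excerpt of the config dump above; the real entry is far longer.
line = ("Start cri plugin with config {PluginConfig:{... "
        "Options:map[SystemdCgroup:true] ... "
        "NetworkPluginConfDir:/etc/cni/net.d ... "
        "SandboxImage:registry.k8s.io/pause:3.8 ...}}")

for key in ("SystemdCgroup", "NetworkPluginConfDir", "SandboxImage"):
    m = re.search(rf"{key}:([^\s\]]+)", line)
    if m:
        print(f"{key} = {m.group(1)}")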
Jan 15 14:12:32.678228 systemd-networkd[1415]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 14:12:32.881005 kubelet[1609]: E0115 14:12:32.880791 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 14:12:32.883793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 14:12:32.884085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 14:12:32.884741 systemd[1]: kubelet.service: Consumed 1.033s CPU time. Jan 15 14:12:35.433803 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 14:12:35.444721 systemd[1]: Started sshd@0-10.244.21.14:22-147.75.109.163:59906.service - OpenSSH per-connection server daemon (147.75.109.163:59906). Jan 15 14:12:36.454888 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 59906 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:36.458231 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:36.477627 systemd-logind[1493]: New session 1 of user core. Jan 15 14:12:36.480757 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 14:12:36.492245 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 14:12:36.618186 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 14:12:36.626797 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 14:12:36.666939 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 14:12:36.745902 login[1602]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 14:12:36.760062 systemd-logind[1493]: New session 2 of user core. Jan 15 14:12:36.767943 login[1601]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 14:12:36.776207 systemd-logind[1493]: New session 3 of user core. Jan 15 14:12:36.833050 systemd[1624]: Queued start job for default target default.target. Jan 15 14:12:36.841904 systemd[1624]: Created slice app.slice - User Application Slice. Jan 15 14:12:36.841964 systemd[1624]: Reached target paths.target - Paths. Jan 15 14:12:36.842013 systemd[1624]: Reached target timers.target - Timers. Jan 15 14:12:36.844234 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 14:12:36.860370 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 14:12:36.860585 systemd[1624]: Reached target sockets.target - Sockets. Jan 15 14:12:36.860612 systemd[1624]: Reached target basic.target - Basic System. Jan 15 14:12:36.860694 systemd[1624]: Reached target default.target - Main User Target. Jan 15 14:12:36.860758 systemd[1624]: Startup finished in 183ms. Jan 15 14:12:36.861072 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 14:12:36.873303 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 14:12:36.874760 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 14:12:36.876206 systemd[1]: Started session-3.scope - Session 3 of User core. 
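The kubelet exit above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is only written once the node is actually bootstrapped (kubeadm, for instance, creates it during init/join), so until then the unit exits with status 1 and systemd reschedules it, as the restart-counter lines later in this log show. The cadence is readable straight from the journal stamps, about ten seconds between the failure and the corresponding 'Scheduled restart job':

from datetime import datetime

fmt = "%H:%M:%S.%f"
failed    = datetime.strptime("14:12:32.884085", fmt)  # kubelet.service: Failed
restarted = datetime.strptime("14:12:42.917271", fmt)  # Scheduled restart job (counter 1)

print(f"restart delay ~ {(restarted - failed).total_seconds():.1f} s")  # ~10.0 s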
Jan 15 14:12:37.404465 coreos-metadata[1483]: Jan 15 14:12:37.404 WARN failed to locate config-drive, using the metadata service API instead Jan 15 14:12:37.432088 coreos-metadata[1483]: Jan 15 14:12:37.432 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 15 14:12:37.440182 coreos-metadata[1483]: Jan 15 14:12:37.440 INFO Fetch failed with 404: resource not found Jan 15 14:12:37.440182 coreos-metadata[1483]: Jan 15 14:12:37.440 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 14:12:37.440939 coreos-metadata[1483]: Jan 15 14:12:37.440 INFO Fetch successful Jan 15 14:12:37.441136 coreos-metadata[1483]: Jan 15 14:12:37.441 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 15 14:12:37.455524 coreos-metadata[1483]: Jan 15 14:12:37.455 INFO Fetch successful Jan 15 14:12:37.455783 coreos-metadata[1483]: Jan 15 14:12:37.455 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 15 14:12:37.472703 coreos-metadata[1483]: Jan 15 14:12:37.472 INFO Fetch successful Jan 15 14:12:37.472955 coreos-metadata[1483]: Jan 15 14:12:37.472 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 15 14:12:37.488373 coreos-metadata[1483]: Jan 15 14:12:37.488 INFO Fetch successful Jan 15 14:12:37.488632 coreos-metadata[1483]: Jan 15 14:12:37.488 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 15 14:12:37.517109 coreos-metadata[1483]: Jan 15 14:12:37.517 INFO Fetch successful Jan 15 14:12:37.525544 systemd[1]: Started sshd@1-10.244.21.14:22-147.75.109.163:59918.service - OpenSSH per-connection server daemon (147.75.109.163:59918). Jan 15 14:12:37.561860 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 14:12:37.562799 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 14:12:38.093109 coreos-metadata[1563]: Jan 15 14:12:38.093 WARN failed to locate config-drive, using the metadata service API instead Jan 15 14:12:38.114966 coreos-metadata[1563]: Jan 15 14:12:38.114 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 15 14:12:38.139390 coreos-metadata[1563]: Jan 15 14:12:38.139 INFO Fetch successful Jan 15 14:12:38.139662 coreos-metadata[1563]: Jan 15 14:12:38.139 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 15 14:12:38.175877 coreos-metadata[1563]: Jan 15 14:12:38.175 INFO Fetch successful Jan 15 14:12:38.178382 unknown[1563]: wrote ssh authorized keys file for user: core Jan 15 14:12:38.210103 update-ssh-keys[1673]: Updated "/home/core/.ssh/authorized_keys" Jan 15 14:12:38.211812 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 15 14:12:38.214413 systemd[1]: Finished sshkeys.service. Jan 15 14:12:38.215836 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 14:12:38.218132 systemd[1]: Startup finished in 1.480s (kernel) + 18.076s (initrd) + 11.900s (userspace) = 31.457s. Jan 15 14:12:38.415012 sshd[1663]: Accepted publickey for core from 147.75.109.163 port 59918 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:38.417018 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:38.424429 systemd-logind[1493]: New session 4 of user core. 
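The coreos-metadata agent above finds no config-drive, falls back to the link-local metadata service, gets a 404 for the OpenStack-style meta_data.json, and then walks the EC2-compatible paths one by one. A standard-library sketch of the same walk, with the endpoint paths copied from the log (run from a cloud instance; anywhere else every fetch will simply fail):

import urllib.request

BASE = "http://169.254.169.254/latest/meta-data"
KEYS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

for key in KEYS:
    try:
        with urllib.request.urlopen(f"{BASE}/{key}", timeout=2) as resp:
            print(f"{key}: {resp.read().decode().strip()}")
    except OSError as exc:  # URLError and timeouts are OSError subclasses
        print(f"{key}: fetch failed ({exc})")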
Jan 15 14:12:38.434339 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 14:12:39.035811 sshd[1663]: pam_unix(sshd:session): session closed for user core Jan 15 14:12:39.039587 systemd[1]: sshd@1-10.244.21.14:22-147.75.109.163:59918.service: Deactivated successfully. Jan 15 14:12:39.041972 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 14:12:39.043857 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. Jan 15 14:12:39.045289 systemd-logind[1493]: Removed session 4. Jan 15 14:12:39.196624 systemd[1]: Started sshd@2-10.244.21.14:22-147.75.109.163:57962.service - OpenSSH per-connection server daemon (147.75.109.163:57962). Jan 15 14:12:40.076746 sshd[1681]: Accepted publickey for core from 147.75.109.163 port 57962 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:40.079406 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:40.087021 systemd-logind[1493]: New session 5 of user core. Jan 15 14:12:40.095500 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 14:12:40.688044 sshd[1681]: pam_unix(sshd:session): session closed for user core Jan 15 14:12:40.691970 systemd[1]: sshd@2-10.244.21.14:22-147.75.109.163:57962.service: Deactivated successfully. Jan 15 14:12:40.694189 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 14:12:40.696198 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. Jan 15 14:12:40.697572 systemd-logind[1493]: Removed session 5. Jan 15 14:12:40.856608 systemd[1]: Started sshd@3-10.244.21.14:22-147.75.109.163:57978.service - OpenSSH per-connection server daemon (147.75.109.163:57978). Jan 15 14:12:41.737788 sshd[1688]: Accepted publickey for core from 147.75.109.163 port 57978 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:41.740460 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:41.748456 systemd-logind[1493]: New session 6 of user core. Jan 15 14:12:41.758274 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 14:12:42.358882 sshd[1688]: pam_unix(sshd:session): session closed for user core Jan 15 14:12:42.363662 systemd[1]: sshd@3-10.244.21.14:22-147.75.109.163:57978.service: Deactivated successfully. Jan 15 14:12:42.365904 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 14:12:42.368194 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Jan 15 14:12:42.369725 systemd-logind[1493]: Removed session 6. Jan 15 14:12:42.519451 systemd[1]: Started sshd@4-10.244.21.14:22-147.75.109.163:57992.service - OpenSSH per-connection server daemon (147.75.109.163:57992). Jan 15 14:12:42.917271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 14:12:42.925465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:12:43.083290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 14:12:43.083857 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 14:12:43.163778 kubelet[1705]: E0115 14:12:43.163506 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 14:12:43.167564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 14:12:43.167842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 14:12:43.411159 sshd[1695]: Accepted publickey for core from 147.75.109.163 port 57992 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:43.413776 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:43.422885 systemd-logind[1493]: New session 7 of user core. Jan 15 14:12:43.440349 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 14:12:43.900047 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 14:12:43.900579 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 14:12:43.916683 sudo[1713]: pam_unix(sudo:session): session closed for user root Jan 15 14:12:44.059952 sshd[1695]: pam_unix(sshd:session): session closed for user core Jan 15 14:12:44.065055 systemd[1]: sshd@4-10.244.21.14:22-147.75.109.163:57992.service: Deactivated successfully. Jan 15 14:12:44.068653 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 14:12:44.072189 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Jan 15 14:12:44.073752 systemd-logind[1493]: Removed session 7. Jan 15 14:12:44.221409 systemd[1]: Started sshd@5-10.244.21.14:22-147.75.109.163:58004.service - OpenSSH per-connection server daemon (147.75.109.163:58004). Jan 15 14:12:45.097920 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 58004 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:45.099914 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:45.105957 systemd-logind[1493]: New session 8 of user core. Jan 15 14:12:45.116865 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 14:12:45.576156 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 14:12:45.576694 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 14:12:45.582401 sudo[1722]: pam_unix(sudo:session): session closed for user root Jan 15 14:12:45.590902 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 14:12:45.591453 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 14:12:45.612495 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 14:12:45.618049 auditctl[1725]: No rules Jan 15 14:12:45.617600 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 14:12:45.617892 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 14:12:45.626586 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 15 14:12:45.664732 augenrules[1743]: No rules Jan 15 14:12:45.666522 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 14:12:45.668347 sudo[1721]: pam_unix(sudo:session): session closed for user root Jan 15 14:12:45.812429 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 15 14:12:45.816218 systemd[1]: sshd@5-10.244.21.14:22-147.75.109.163:58004.service: Deactivated successfully. Jan 15 14:12:45.818682 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 14:12:45.820562 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Jan 15 14:12:45.822204 systemd-logind[1493]: Removed session 8. Jan 15 14:12:45.966744 systemd[1]: Started sshd@6-10.244.21.14:22-147.75.109.163:58016.service - OpenSSH per-connection server daemon (147.75.109.163:58016). Jan 15 14:12:46.857384 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 58016 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:12:46.859332 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:12:46.865912 systemd-logind[1493]: New session 9 of user core. Jan 15 14:12:46.874219 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 14:12:47.335329 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 14:12:47.335775 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 14:12:47.815502 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 14:12:47.817151 (dockerd)[1769]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 14:12:48.281583 dockerd[1769]: time="2025-01-15T14:12:48.280653415Z" level=info msg="Starting up" Jan 15 14:12:48.419037 systemd[1]: var-lib-docker-metacopy\x2dcheck790180533-merged.mount: Deactivated successfully. Jan 15 14:12:48.440845 dockerd[1769]: time="2025-01-15T14:12:48.440421084Z" level=info msg="Loading containers: start." Jan 15 14:12:48.582765 kernel: Initializing XFRM netlink socket Jan 15 14:12:48.690705 systemd-networkd[1415]: docker0: Link UP Jan 15 14:12:48.711829 dockerd[1769]: time="2025-01-15T14:12:48.711765006Z" level=info msg="Loading containers: done." Jan 15 14:12:48.733599 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3038870334-merged.mount: Deactivated successfully. Jan 15 14:12:48.735327 dockerd[1769]: time="2025-01-15T14:12:48.734298459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 14:12:48.735327 dockerd[1769]: time="2025-01-15T14:12:48.734443061Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 14:12:48.735327 dockerd[1769]: time="2025-01-15T14:12:48.734627904Z" level=info msg="Daemon has completed initialization" Jan 15 14:12:48.772929 dockerd[1769]: time="2025-01-15T14:12:48.772809598Z" level=info msg="API listen on /run/docker.sock" Jan 15 14:12:48.773486 systemd[1]: Started docker.service - Docker Application Container Engine. 
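dockerd's own timestamps above bound its cold start: 'Starting up' at 14:12:48.280653415Z and 'Daemon has completed initialization' at 14:12:48.734627904Z, i.e. well under a second including the overlay2 checks and XFRM netlink setup in between. Note dockerd logs nanosecond fractions, which datetime.fromisoformat does not accept, hence the trim to microseconds:

from datetime import datetime

def parse(ts: str) -> datetime:
    # Trim the 9-digit fraction to the 6 digits fromisoformat accepts.
    head, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{head}.{frac[:6]}")

start = parse("2025-01-15T14:12:48.280653415Z")
done  = parse("2025-01-15T14:12:48.734627904Z")

print(f"dockerd initialized in ~{(done - start).total_seconds():.3f} s")  # ~0.454 s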
Jan 15 14:12:51.754641 containerd[1513]: time="2025-01-15T14:12:51.753547259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 15 14:12:52.508547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167522741.mount: Deactivated successfully. Jan 15 14:12:53.417261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 14:12:53.428480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:12:53.591504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:12:53.605157 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 14:12:53.729441 kubelet[1970]: E0115 14:12:53.729115 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 14:12:53.731841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 14:12:53.732134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 14:12:54.677398 containerd[1513]: time="2025-01-15T14:12:54.676027929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:54.678767 containerd[1513]: time="2025-01-15T14:12:54.678162975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975491" Jan 15 14:12:54.679607 containerd[1513]: time="2025-01-15T14:12:54.679567540Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:54.686560 containerd[1513]: time="2025-01-15T14:12:54.686508896Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.932837914s" Jan 15 14:12:54.687037 containerd[1513]: time="2025-01-15T14:12:54.686734265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:54.688176 containerd[1513]: time="2025-01-15T14:12:54.688091351Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 15 14:12:54.698093 containerd[1513]: time="2025-01-15T14:12:54.697882695Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 15 14:12:57.054458 containerd[1513]: time="2025-01-15T14:12:57.054199543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:57.056013 containerd[1513]: time="2025-01-15T14:12:57.055933127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes 
read=24702165" Jan 15 14:12:57.057148 containerd[1513]: time="2025-01-15T14:12:57.057076164Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:57.060921 containerd[1513]: time="2025-01-15T14:12:57.060816462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:57.063901 containerd[1513]: time="2025-01-15T14:12:57.063850646Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.365915984s" Jan 15 14:12:57.064007 containerd[1513]: time="2025-01-15T14:12:57.063899345Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 15 14:12:57.065807 containerd[1513]: time="2025-01-15T14:12:57.065703223Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 15 14:12:58.963918 containerd[1513]: time="2025-01-15T14:12:58.963727844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:58.966338 containerd[1513]: time="2025-01-15T14:12:58.966230944Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652075" Jan 15 14:12:58.967506 containerd[1513]: time="2025-01-15T14:12:58.967426582Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:58.971432 containerd[1513]: time="2025-01-15T14:12:58.971338877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:12:58.973370 containerd[1513]: time="2025-01-15T14:12:58.973151578Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.907308438s" Jan 15 14:12:58.973370 containerd[1513]: time="2025-01-15T14:12:58.973204577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 15 14:12:58.974615 containerd[1513]: time="2025-01-15T14:12:58.974290547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 15 14:13:00.910134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042117808.mount: Deactivated successfully. 
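Each "Pulled image" entry above records how a floating tag was resolved to an immutable repo digest. The same mapping can be read back later through the CRI; a sketch, assuming containerd's default socket path:

```sh
# List pulled images together with their repo digests via the CRI.
# unix:///run/containerd/containerd.sock is containerd's default
# endpoint; adjust it if the runtime is configured differently.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  images --digests
```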
Jan 15 14:13:01.619376 containerd[1513]: time="2025-01-15T14:13:01.619273942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:01.620839 containerd[1513]: time="2025-01-15T14:13:01.620550890Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Jan 15 14:13:01.621800 containerd[1513]: time="2025-01-15T14:13:01.621314312Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:01.625391 containerd[1513]: time="2025-01-15T14:13:01.625328453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:01.626659 containerd[1513]: time="2025-01-15T14:13:01.626616509Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.651874622s" Jan 15 14:13:01.626737 containerd[1513]: time="2025-01-15T14:13:01.626690561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 15 14:13:01.628192 containerd[1513]: time="2025-01-15T14:13:01.628159668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 14:13:02.325350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114027252.mount: Deactivated successfully. Jan 15 14:13:02.703731 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
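The images arriving one by one in this stretch of the log (apiserver, controller-manager, scheduler, proxy, then coredns, pause, and etcd below) are the standard control-plane set for this Kubernetes release, and they can also be fetched in a single step ahead of cluster bootstrap. A sketch, pinning the version visible in the tags above:

```sh
# Pre-pull the control-plane image set instead of letting bootstrap
# pull each image lazily, as the log shows happening here.
kubeadm config images pull --kubernetes-version v1.31.4
```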
Jan 15 14:13:03.513259 containerd[1513]: time="2025-01-15T14:13:03.513183317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:03.516066 containerd[1513]: time="2025-01-15T14:13:03.515938661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 15 14:13:03.517204 containerd[1513]: time="2025-01-15T14:13:03.517119851Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:03.520603 containerd[1513]: time="2025-01-15T14:13:03.520527940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:03.522409 containerd[1513]: time="2025-01-15T14:13:03.522214518Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.893907966s" Jan 15 14:13:03.522409 containerd[1513]: time="2025-01-15T14:13:03.522262688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 15 14:13:03.523656 containerd[1513]: time="2025-01-15T14:13:03.523456938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 14:13:03.917471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 14:13:03.933416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:13:04.113275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:13:04.123545 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 14:13:04.196238 kubelet[2053]: E0115 14:13:04.195954 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 14:13:04.199991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 14:13:04.200277 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 14:13:04.277425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594448255.mount: Deactivated successfully. 
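Both kubelet crashes so far fail the same way: the service starts with --config pointing at /var/lib/kubelet/config.yaml before that file exists. On a kubeadm-managed node the file is generated during `kubeadm init` or `kubeadm join`, so the restart loop resolves itself once bootstrap reaches that step. Purely as an illustration of what the kubelet is waiting for, a minimal hand-written stand-in might look like this (the cgroup driver matches the one logged after the successful start below):

```sh
# Illustrative only; kubeadm normally generates this file during
# init/join. A minimal KubeletConfiguration satisfying --config.
sudo tee /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
```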
Jan 15 14:13:04.283814 containerd[1513]: time="2025-01-15T14:13:04.282557033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:04.283814 containerd[1513]: time="2025-01-15T14:13:04.283756192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 15 14:13:04.284156 containerd[1513]: time="2025-01-15T14:13:04.284124334Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:04.287062 containerd[1513]: time="2025-01-15T14:13:04.287018145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:04.288548 containerd[1513]: time="2025-01-15T14:13:04.288498714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 765.002063ms" Jan 15 14:13:04.288725 containerd[1513]: time="2025-01-15T14:13:04.288691886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 14:13:04.289668 containerd[1513]: time="2025-01-15T14:13:04.289637235Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 15 14:13:04.934135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249129421.mount: Deactivated successfully. Jan 15 14:13:08.896698 containerd[1513]: time="2025-01-15T14:13:08.896522362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:08.899452 containerd[1513]: time="2025-01-15T14:13:08.899289909Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 15 14:13:08.900193 containerd[1513]: time="2025-01-15T14:13:08.900123890Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:08.913013 containerd[1513]: time="2025-01-15T14:13:08.912684992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:08.914562 containerd[1513]: time="2025-01-15T14:13:08.914505043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.624667183s" Jan 15 14:13:08.914650 containerd[1513]: time="2025-01-15T14:13:08.914575210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 15 14:13:13.034567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
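The "restart counter is at N" lines come from systemd's Restart= policy rescheduling kubelet.service after each exit. The loop and its most recent failure are easiest to read from the unit itself:

```sh
# Inspect the kubelet restart loop and its latest exit status.
systemctl status kubelet.service
journalctl -u kubelet.service -n 20 --no-pager
```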
Jan 15 14:13:13.050446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:13:13.099811 systemd[1]: Reloading requested from client PID 2145 ('systemctl') (unit session-9.scope)... Jan 15 14:13:13.099863 systemd[1]: Reloading... Jan 15 14:13:13.264033 zram_generator::config[2180]: No configuration found. Jan 15 14:13:13.448145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 14:13:13.557300 systemd[1]: Reloading finished in 456 ms. Jan 15 14:13:13.635959 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 14:13:13.636372 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 14:13:13.637100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:13:13.645594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:13:13.795459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:13:13.814678 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 14:13:13.874688 kubelet[2251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 14:13:13.874688 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 14:13:13.874688 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
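All three deprecation warnings point at the same remedy: moving the flag into the file given by --config. Under KubeletConfiguration v1beta1 the runtime endpoint and the volume plugin directory have direct field equivalents (the pod-infra image case is handled by the CRI, as the middle warning says). A sketch reusing the flexvolume path that appears later in this log; the socket path is an assumption based on containerd defaults:

```sh
# Config-file equivalents for two of the deprecated kubelet flags
# (field names from the KubeletConfiguration v1beta1 API).
sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
```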
Jan 15 14:13:13.972047 kubelet[2251]: I0115 14:13:13.971852 2251 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 14:13:14.527562 kubelet[2251]: I0115 14:13:14.527502 2251 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 15 14:13:14.527789 kubelet[2251]: I0115 14:13:14.527770 2251 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 14:13:14.528243 kubelet[2251]: I0115 14:13:14.528219 2251 server.go:929] "Client rotation is on, will bootstrap in background" Jan 15 14:13:14.557141 kubelet[2251]: I0115 14:13:14.557094 2251 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 14:13:14.560254 kubelet[2251]: E0115 14:13:14.560205 2251 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.21.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:14.568535 kubelet[2251]: E0115 14:13:14.568495 2251 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 15 14:13:14.568865 kubelet[2251]: I0115 14:13:14.568774 2251 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 15 14:13:14.592291 kubelet[2251]: I0115 14:13:14.591941 2251 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 14:13:14.594053 kubelet[2251]: I0115 14:13:14.593889 2251 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 15 14:13:14.594272 kubelet[2251]: I0115 14:13:14.594200 2251 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 14:13:14.594641 kubelet[2251]: I0115 14:13:14.594267 2251 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-8ino3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 14:13:14.594934 kubelet[2251]: I0115 14:13:14.594662 2251 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 14:13:14.594934 kubelet[2251]: I0115 14:13:14.594686 2251 container_manager_linux.go:300] "Creating device plugin manager" Jan 15 14:13:14.594934 kubelet[2251]: I0115 14:13:14.594904 2251 state_mem.go:36] "Initialized new in-memory state store" Jan 15 14:13:14.598609 kubelet[2251]: I0115 14:13:14.598545 2251 kubelet.go:408] "Attempting to sync node with API server" Jan 15 14:13:14.598609 kubelet[2251]: I0115 14:13:14.598596 2251 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 14:13:14.598800 kubelet[2251]: I0115 14:13:14.598682 2251 kubelet.go:314] "Adding apiserver pod source" Jan 15 14:13:14.598800 kubelet[2251]: I0115 14:13:14.598749 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 14:13:14.606841 kubelet[2251]: W0115 14:13:14.606261 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.21.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-8ino3.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:14.606841 kubelet[2251]: E0115 14:13:14.606472 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.244.21.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-8ino3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:14.606841 kubelet[2251]: I0115 14:13:14.606631 2251 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 14:13:14.610584 kubelet[2251]: I0115 14:13:14.610336 2251 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 14:13:14.612374 kubelet[2251]: W0115 14:13:14.611455 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 14:13:14.614010 kubelet[2251]: I0115 14:13:14.613902 2251 server.go:1269] "Started kubelet" Jan 15 14:13:14.617156 kubelet[2251]: W0115 14:13:14.616768 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.21.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:14.617156 kubelet[2251]: E0115 14:13:14.616845 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.21.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:14.617156 kubelet[2251]: I0115 14:13:14.616938 2251 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 14:13:14.618474 kubelet[2251]: I0115 14:13:14.618351 2251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 14:13:14.619266 kubelet[2251]: I0115 14:13:14.619243 2251 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 14:13:14.622621 kubelet[2251]: I0115 14:13:14.622587 2251 server.go:460] "Adding debug handlers to kubelet server" Jan 15 14:13:14.625729 kubelet[2251]: E0115 14:13:14.620809 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.21.14:6443/api/v1/namespaces/default/events\": dial tcp 10.244.21.14:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-8ino3.gb1.brightbox.com.181ae32e5f6f0892 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-8ino3.gb1.brightbox.com,UID:srv-8ino3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-8ino3.gb1.brightbox.com,},FirstTimestamp:2025-01-15 14:13:14.613860498 +0000 UTC m=+0.794158790,LastTimestamp:2025-01-15 14:13:14.613860498 +0000 UTC m=+0.794158790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-8ino3.gb1.brightbox.com,}" Jan 15 14:13:14.625729 kubelet[2251]: I0115 14:13:14.624790 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 14:13:14.627306 kubelet[2251]: I0115 14:13:14.626518 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 14:13:14.633792 kubelet[2251]: E0115 
14:13:14.633048 2251 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-8ino3.gb1.brightbox.com\" not found" Jan 15 14:13:14.633792 kubelet[2251]: I0115 14:13:14.633117 2251 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 15 14:13:14.633792 kubelet[2251]: I0115 14:13:14.633493 2251 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 15 14:13:14.633792 kubelet[2251]: I0115 14:13:14.633614 2251 reconciler.go:26] "Reconciler: start to sync state" Jan 15 14:13:14.636679 kubelet[2251]: W0115 14:13:14.635425 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.21.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:14.636679 kubelet[2251]: E0115 14:13:14.635495 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.21.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:14.637279 kubelet[2251]: E0115 14:13:14.637213 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8ino3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.14:6443: connect: connection refused" interval="200ms" Jan 15 14:13:14.638445 kubelet[2251]: I0115 14:13:14.638417 2251 factory.go:221] Registration of the systemd container factory successfully Jan 15 14:13:14.638591 kubelet[2251]: I0115 14:13:14.638554 2251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 14:13:14.640713 kubelet[2251]: E0115 14:13:14.640683 2251 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 14:13:14.642189 kubelet[2251]: I0115 14:13:14.642162 2251 factory.go:221] Registration of the containerd container factory successfully Jan 15 14:13:14.676295 kubelet[2251]: I0115 14:13:14.676232 2251 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 14:13:14.676484 kubelet[2251]: I0115 14:13:14.676464 2251 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 14:13:14.676743 kubelet[2251]: I0115 14:13:14.676625 2251 state_mem.go:36] "Initialized new in-memory state store" Jan 15 14:13:14.680103 kubelet[2251]: I0115 14:13:14.680083 2251 policy_none.go:49] "None policy: Start" Jan 15 14:13:14.681434 kubelet[2251]: I0115 14:13:14.681252 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 14:13:14.684638 kubelet[2251]: I0115 14:13:14.684595 2251 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 14:13:14.684638 kubelet[2251]: I0115 14:13:14.684639 2251 state_mem.go:35] "Initializing new in-memory state store" Jan 15 14:13:14.687010 kubelet[2251]: I0115 14:13:14.686953 2251 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 14:13:14.687463 kubelet[2251]: I0115 14:13:14.687130 2251 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 14:13:14.687463 kubelet[2251]: I0115 14:13:14.687171 2251 kubelet.go:2321] "Starting kubelet main sync loop" Jan 15 14:13:14.687463 kubelet[2251]: E0115 14:13:14.687237 2251 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 14:13:14.688616 kubelet[2251]: W0115 14:13:14.688567 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.21.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:14.689286 kubelet[2251]: E0115 14:13:14.689238 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.21.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:14.697962 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 14:13:14.716260 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 14:13:14.722638 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 14:13:14.735397 kubelet[2251]: E0115 14:13:14.735279 2251 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-8ino3.gb1.brightbox.com\" not found" Jan 15 14:13:14.735589 kubelet[2251]: I0115 14:13:14.735551 2251 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 14:13:14.735913 kubelet[2251]: I0115 14:13:14.735886 2251 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 14:13:14.736017 kubelet[2251]: I0115 14:13:14.735920 2251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 14:13:14.736622 kubelet[2251]: I0115 14:13:14.736273 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 14:13:14.740341 kubelet[2251]: E0115 14:13:14.740285 2251 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-8ino3.gb1.brightbox.com\" not found" Jan 15 14:13:14.805072 systemd[1]: Created slice kubepods-burstable-pod847205d29e9e61eec6e21f24cef534af.slice - libcontainer container kubepods-burstable-pod847205d29e9e61eec6e21f24cef534af.slice. Jan 15 14:13:14.827083 systemd[1]: Created slice kubepods-burstable-podfb5ef8009fa7044faac5882b0636647c.slice - libcontainer container kubepods-burstable-podfb5ef8009fa7044faac5882b0636647c.slice. Jan 15 14:13:14.833492 systemd[1]: Created slice kubepods-burstable-podbaee752f84fef4ea9490ea979ecaa6fe.slice - libcontainer container kubepods-burstable-podbaee752f84fef4ea9490ea979ecaa6fe.slice. 
Jan 15 14:13:14.838486 kubelet[2251]: E0115 14:13:14.838437 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8ino3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.14:6443: connect: connection refused" interval="400ms" Jan 15 14:13:14.844406 kubelet[2251]: I0115 14:13:14.844227 2251 kubelet_node_status.go:72] "Attempting to register node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.844685 kubelet[2251]: E0115 14:13:14.844655 2251 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.14:6443/api/v1/nodes\": dial tcp 10.244.21.14:6443: connect: connection refused" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.936526 kubelet[2251]: I0115 14:13:14.936383 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.936526 kubelet[2251]: I0115 14:13:14.936454 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baee752f84fef4ea9490ea979ecaa6fe-ca-certs\") pod \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" (UID: \"baee752f84fef4ea9490ea979ecaa6fe\") " pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.936526 kubelet[2251]: I0115 14:13:14.936490 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baee752f84fef4ea9490ea979ecaa6fe-k8s-certs\") pod \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" (UID: \"baee752f84fef4ea9490ea979ecaa6fe\") " pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.936526 kubelet[2251]: I0115 14:13:14.936518 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baee752f84fef4ea9490ea979ecaa6fe-usr-share-ca-certificates\") pod \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" (UID: \"baee752f84fef4ea9490ea979ecaa6fe\") " pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.936526 kubelet[2251]: I0115 14:13:14.936552 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-ca-certs\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.937539 kubelet[2251]: I0115 14:13:14.936580 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-flexvolume-dir\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.937539 kubelet[2251]: I0115 14:13:14.936611 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-k8s-certs\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.937539 kubelet[2251]: I0115 14:13:14.936637 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-kubeconfig\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:14.937539 kubelet[2251]: I0115 14:13:14.936664 2251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb5ef8009fa7044faac5882b0636647c-kubeconfig\") pod \"kube-scheduler-srv-8ino3.gb1.brightbox.com\" (UID: \"fb5ef8009fa7044faac5882b0636647c\") " pod="kube-system/kube-scheduler-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:15.048399 kubelet[2251]: I0115 14:13:15.048353 2251 kubelet_node_status.go:72] "Attempting to register node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:15.048823 kubelet[2251]: E0115 14:13:15.048768 2251 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.14:6443/api/v1/nodes\": dial tcp 10.244.21.14:6443: connect: connection refused" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:15.121495 containerd[1513]: time="2025-01-15T14:13:15.121261523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-8ino3.gb1.brightbox.com,Uid:847205d29e9e61eec6e21f24cef534af,Namespace:kube-system,Attempt:0,}" Jan 15 14:13:15.148694 containerd[1513]: time="2025-01-15T14:13:15.148623356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-8ino3.gb1.brightbox.com,Uid:baee752f84fef4ea9490ea979ecaa6fe,Namespace:kube-system,Attempt:0,}" Jan 15 14:13:15.150950 containerd[1513]: time="2025-01-15T14:13:15.150017171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-8ino3.gb1.brightbox.com,Uid:fb5ef8009fa7044faac5882b0636647c,Namespace:kube-system,Attempt:0,}" Jan 15 14:13:15.239441 kubelet[2251]: E0115 14:13:15.239264 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8ino3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.14:6443: connect: connection refused" interval="800ms" Jan 15 14:13:15.452515 kubelet[2251]: I0115 14:13:15.452462 2251 kubelet_node_status.go:72] "Attempting to register node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:15.453015 kubelet[2251]: E0115 14:13:15.452911 2251 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.14:6443/api/v1/nodes\": dial tcp 10.244.21.14:6443: connect: connection refused" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:15.761935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1454864764.mount: Deactivated successfully. Jan 15 14:13:15.762735 update_engine[1495]: I20250115 14:13:15.762161 1495 update_attempter.cc:509] Updating boot flags... 
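"Failed to ensure lease exists, will retry" is the kubelet backing off (the interval= values double from 200ms to 400ms to 800ms and, below, 1.6s) while nothing is listening on 10.244.21.14:6443 yet; the apiserver it is dialing is itself one of the static pods whose sandboxes are being created here. Once the apiserver answers, the lease can be confirmed; the kubeconfig path below is the usual kubeadm location and is an assumption:

```sh
# Confirm the node lease the kubelet kept retrying, once the
# apiserver is reachable (node name taken from the log).
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  -n kube-node-lease get lease srv-8ino3.gb1.brightbox.com
```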
Jan 15 14:13:15.780023 containerd[1513]: time="2025-01-15T14:13:15.778927421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:13:15.783301 containerd[1513]: time="2025-01-15T14:13:15.783219542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 15 14:13:15.784747 containerd[1513]: time="2025-01-15T14:13:15.784685299Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:13:15.787182 containerd[1513]: time="2025-01-15T14:13:15.787129208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 14:13:15.791184 kubelet[2251]: W0115 14:13:15.790992 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.21.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:15.791184 kubelet[2251]: E0115 14:13:15.791143 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.21.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:15.791621 containerd[1513]: time="2025-01-15T14:13:15.791150812Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:13:15.795652 containerd[1513]: time="2025-01-15T14:13:15.794388367Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:13:15.798007 containerd[1513]: time="2025-01-15T14:13:15.796737104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 14:13:15.801230 containerd[1513]: time="2025-01-15T14:13:15.801159974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:13:15.804669 containerd[1513]: time="2025-01-15T14:13:15.804619801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 681.919057ms" Jan 15 14:13:15.813729 containerd[1513]: time="2025-01-15T14:13:15.813650633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 663.554403ms" Jan 15 
14:13:15.814307 containerd[1513]: time="2025-01-15T14:13:15.814271624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.492274ms" Jan 15 14:13:15.842319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2296) Jan 15 14:13:15.893964 kubelet[2251]: W0115 14:13:15.893831 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.21.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:15.893964 kubelet[2251]: E0115 14:13:15.893918 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.21.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:15.941111 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2295) Jan 15 14:13:15.961473 kubelet[2251]: W0115 14:13:15.961055 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.21.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:15.967601 kubelet[2251]: E0115 14:13:15.964024 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.21.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:16.044558 kubelet[2251]: E0115 14:13:16.044012 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8ino3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.14:6443: connect: connection refused" interval="1.6s" Jan 15 14:13:16.078005 kubelet[2251]: W0115 14:13:16.077739 2251 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.21.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-8ino3.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.14:6443: connect: connection refused Jan 15 14:13:16.078005 kubelet[2251]: E0115 14:13:16.077859 2251 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.21.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-8ino3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:16.220891 containerd[1513]: time="2025-01-15T14:13:16.219898888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:16.220891 containerd[1513]: time="2025-01-15T14:13:16.220021071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:16.220891 containerd[1513]: time="2025-01-15T14:13:16.220042784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:16.220891 containerd[1513]: time="2025-01-15T14:13:16.220214778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.227076935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.227279409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.227331962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.227480878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.227878335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.227967020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.228024234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:16.229270 containerd[1513]: time="2025-01-15T14:13:16.228439923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:16.261421 kubelet[2251]: I0115 14:13:16.260581 2251 kubelet_node_status.go:72] "Attempting to register node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:16.261421 kubelet[2251]: E0115 14:13:16.261197 2251 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.21.14:6443/api/v1/nodes\": dial tcp 10.244.21.14:6443: connect: connection refused" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:16.274303 systemd[1]: Started cri-containerd-4e7ea4bda881d87d074f7c8c930cf17b6b5934e1fa26a2f642a53f0eaf701944.scope - libcontainer container 4e7ea4bda881d87d074f7c8c930cf17b6b5934e1fa26a2f642a53f0eaf701944. Jan 15 14:13:16.290505 systemd[1]: Started cri-containerd-f7001466ae1822301ee541549cd3bdf558919b47186da5382e26a0ae5de5edbe.scope - libcontainer container f7001466ae1822301ee541549cd3bdf558919b47186da5382e26a0ae5de5edbe. Jan 15 14:13:16.320313 systemd[1]: Started cri-containerd-2512a00587c5e7849e615339fabeb4284777d6adad68f61095cac00bc18b59ca.scope - libcontainer container 2512a00587c5e7849e615339fabeb4284777d6adad68f61095cac00bc18b59ca. 
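Each "Started cri-containerd-<id>.scope" above is the sandbox (pause) container for one of the three control-plane pods, which is why three pause-image pulls were logged just before. A scope id can be matched against what the CRI reports, for example:

```sh
# Map a cri-containerd-*.scope unit to its pod sandbox by pod name
# (pod name taken from the RunPodSandbox entries above).
sudo crictl pods --name kube-apiserver-srv-8ino3.gb1.brightbox.com
```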
Jan 15 14:13:16.381460 containerd[1513]: time="2025-01-15T14:13:16.381372683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-8ino3.gb1.brightbox.com,Uid:baee752f84fef4ea9490ea979ecaa6fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e7ea4bda881d87d074f7c8c930cf17b6b5934e1fa26a2f642a53f0eaf701944\"" Jan 15 14:13:16.395417 containerd[1513]: time="2025-01-15T14:13:16.395208826Z" level=info msg="CreateContainer within sandbox \"4e7ea4bda881d87d074f7c8c930cf17b6b5934e1fa26a2f642a53f0eaf701944\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 14:13:16.421419 containerd[1513]: time="2025-01-15T14:13:16.421355811Z" level=info msg="CreateContainer within sandbox \"4e7ea4bda881d87d074f7c8c930cf17b6b5934e1fa26a2f642a53f0eaf701944\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5136b07eca630563bd79a4867bf812048c4bf2c9ef7e9d0ebe245017432c6712\"" Jan 15 14:13:16.424878 containerd[1513]: time="2025-01-15T14:13:16.424748892Z" level=info msg="StartContainer for \"5136b07eca630563bd79a4867bf812048c4bf2c9ef7e9d0ebe245017432c6712\"" Jan 15 14:13:16.444594 containerd[1513]: time="2025-01-15T14:13:16.444386109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-8ino3.gb1.brightbox.com,Uid:fb5ef8009fa7044faac5882b0636647c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7001466ae1822301ee541549cd3bdf558919b47186da5382e26a0ae5de5edbe\"" Jan 15 14:13:16.449233 containerd[1513]: time="2025-01-15T14:13:16.449177948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-8ino3.gb1.brightbox.com,Uid:847205d29e9e61eec6e21f24cef534af,Namespace:kube-system,Attempt:0,} returns sandbox id \"2512a00587c5e7849e615339fabeb4284777d6adad68f61095cac00bc18b59ca\"" Jan 15 14:13:16.450932 containerd[1513]: time="2025-01-15T14:13:16.450627675Z" level=info msg="CreateContainer within sandbox \"f7001466ae1822301ee541549cd3bdf558919b47186da5382e26a0ae5de5edbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 14:13:16.453200 containerd[1513]: time="2025-01-15T14:13:16.453157513Z" level=info msg="CreateContainer within sandbox \"2512a00587c5e7849e615339fabeb4284777d6adad68f61095cac00bc18b59ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 14:13:16.490207 systemd[1]: Started cri-containerd-5136b07eca630563bd79a4867bf812048c4bf2c9ef7e9d0ebe245017432c6712.scope - libcontainer container 5136b07eca630563bd79a4867bf812048c4bf2c9ef7e9d0ebe245017432c6712. 
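RunPodSandbox returns the sandbox id, while CreateContainer returns a separate container id that runs inside it; the two can be cross-referenced through the CRI. Whether the --pod filter accepts a truncated id may depend on the crictl version, so the full sandbox id from the log is the safe input:

```sh
# List the containers inside the kube-apiserver sandbox
# (sandbox id copied from the RunPodSandbox return above).
sudo crictl ps -a \
  --pod 4e7ea4bda881d87d074f7c8c930cf17b6b5934e1fa26a2f642a53f0eaf701944
```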
Jan 15 14:13:16.492772 containerd[1513]: time="2025-01-15T14:13:16.492701256Z" level=info msg="CreateContainer within sandbox \"2512a00587c5e7849e615339fabeb4284777d6adad68f61095cac00bc18b59ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2327387f1143a6bb1443ebfe5b71247c6db8b236aea891462b8edfdedf316247\"" Jan 15 14:13:16.494934 containerd[1513]: time="2025-01-15T14:13:16.494843388Z" level=info msg="StartContainer for \"2327387f1143a6bb1443ebfe5b71247c6db8b236aea891462b8edfdedf316247\"" Jan 15 14:13:16.500405 containerd[1513]: time="2025-01-15T14:13:16.500192539Z" level=info msg="CreateContainer within sandbox \"f7001466ae1822301ee541549cd3bdf558919b47186da5382e26a0ae5de5edbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"69c5c4826fdb6f3ec6fc34282230f01125f0bd01ee9b1c30c5e8e5094a631014\"" Jan 15 14:13:16.502012 containerd[1513]: time="2025-01-15T14:13:16.501786386Z" level=info msg="StartContainer for \"69c5c4826fdb6f3ec6fc34282230f01125f0bd01ee9b1c30c5e8e5094a631014\"" Jan 15 14:13:16.571359 systemd[1]: Started cri-containerd-2327387f1143a6bb1443ebfe5b71247c6db8b236aea891462b8edfdedf316247.scope - libcontainer container 2327387f1143a6bb1443ebfe5b71247c6db8b236aea891462b8edfdedf316247. Jan 15 14:13:16.578435 kubelet[2251]: E0115 14:13:16.578198 2251 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.21.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.21.14:6443: connect: connection refused" logger="UnhandledError" Jan 15 14:13:16.580341 systemd[1]: Started cri-containerd-69c5c4826fdb6f3ec6fc34282230f01125f0bd01ee9b1c30c5e8e5094a631014.scope - libcontainer container 69c5c4826fdb6f3ec6fc34282230f01125f0bd01ee9b1c30c5e8e5094a631014. Jan 15 14:13:16.596302 containerd[1513]: time="2025-01-15T14:13:16.596234646Z" level=info msg="StartContainer for \"5136b07eca630563bd79a4867bf812048c4bf2c9ef7e9d0ebe245017432c6712\" returns successfully" Jan 15 14:13:16.686772 containerd[1513]: time="2025-01-15T14:13:16.685955931Z" level=info msg="StartContainer for \"69c5c4826fdb6f3ec6fc34282230f01125f0bd01ee9b1c30c5e8e5094a631014\" returns successfully" Jan 15 14:13:16.696080 containerd[1513]: time="2025-01-15T14:13:16.695331357Z" level=info msg="StartContainer for \"2327387f1143a6bb1443ebfe5b71247c6db8b236aea891462b8edfdedf316247\" returns successfully" Jan 15 14:13:17.867944 kubelet[2251]: I0115 14:13:17.867820 2251 kubelet_node_status.go:72] "Attempting to register node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:19.661313 kubelet[2251]: E0115 14:13:19.661193 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-8ino3.gb1.brightbox.com\" not found" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:19.730279 kubelet[2251]: I0115 14:13:19.730149 2251 kubelet_node_status.go:75] "Successfully registered node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:20.617606 kubelet[2251]: I0115 14:13:20.617505 2251 apiserver.go:52] "Watching apiserver" Jan 15 14:13:20.633818 kubelet[2251]: I0115 14:13:20.633718 2251 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 15 14:13:21.981381 systemd[1]: Reloading requested from client PID 2544 ('systemctl') (unit session-9.scope)... Jan 15 14:13:21.982088 systemd[1]: Reloading... 
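Each daemon reload in this log repeats the docker.socket notice about ListenStream= referencing the legacy /var/run directory (it appears again just below); systemd already rewrites the path at runtime, as the message says. To make the unit itself clean, given that the vendor unit on Flatcar lives on the read-only /usr and cannot be edited in place, a drop-in that resets and re-declares the socket would do:

```sh
# Quiet the legacy-path notice by overriding the socket location.
# The empty ListenStream= clears the value from the vendor unit first.
sudo mkdir -p /etc/systemd/system/docker.socket.d
sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
sudo systemctl daemon-reload
```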
Jan 15 14:13:22.130077 zram_generator::config[2586]: No configuration found. Jan 15 14:13:22.331355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 14:13:22.473227 systemd[1]: Reloading finished in 490 ms. Jan 15 14:13:22.555490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:13:22.573428 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 14:13:22.574244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:13:22.574575 systemd[1]: kubelet.service: Consumed 1.246s CPU time, 114.3M memory peak, 0B memory swap peak. Jan 15 14:13:22.583644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:13:22.801220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:13:22.803574 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 14:13:22.922236 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 14:13:22.922236 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 14:13:22.922236 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 14:13:22.922945 kubelet[2647]: I0115 14:13:22.922356 2647 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 14:13:22.941139 kubelet[2647]: I0115 14:13:22.940124 2647 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 15 14:13:22.941139 kubelet[2647]: I0115 14:13:22.940378 2647 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 14:13:22.943012 kubelet[2647]: I0115 14:13:22.941899 2647 server.go:929] "Client rotation is on, will bootstrap in background" Jan 15 14:13:22.944381 kubelet[2647]: I0115 14:13:22.944351 2647 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 14:13:22.949962 kubelet[2647]: I0115 14:13:22.949893 2647 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 14:13:22.960548 kubelet[2647]: E0115 14:13:22.960486 2647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 15 14:13:22.960857 kubelet[2647]: I0115 14:13:22.960832 2647 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 15 14:13:22.971018 kubelet[2647]: I0115 14:13:22.969760 2647 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 14:13:22.971503 kubelet[2647]: I0115 14:13:22.971479 2647 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 15 14:13:22.972124 kubelet[2647]: I0115 14:13:22.972066 2647 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 14:13:22.973006 kubelet[2647]: I0115 14:13:22.972271 2647 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-8ino3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 14:13:22.973006 kubelet[2647]: I0115 14:13:22.972622 2647 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 14:13:22.973006 kubelet[2647]: I0115 14:13:22.972641 2647 container_manager_linux.go:300] "Creating device plugin manager" Jan 15 14:13:22.973006 kubelet[2647]: I0115 14:13:22.972757 2647 state_mem.go:36] "Initialized new in-memory state store" Jan 15 14:13:22.973719 kubelet[2647]: I0115 14:13:22.973486 2647 kubelet.go:408] "Attempting to sync node with API server" Jan 15 14:13:22.973719 kubelet[2647]: I0115 14:13:22.973523 2647 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 14:13:22.973719 kubelet[2647]: I0115 14:13:22.973581 2647 kubelet.go:314] "Adding apiserver pod source" Jan 15 14:13:22.973719 kubelet[2647]: I0115 14:13:22.973616 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 14:13:22.977151 kubelet[2647]: I0115 14:13:22.976007 2647 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 14:13:22.979012 kubelet[2647]: I0115 14:13:22.978112 2647 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 14:13:22.979012 kubelet[2647]: I0115 14:13:22.978904 2647 server.go:1269] "Started kubelet" Jan 15 14:13:22.981894 kubelet[2647]: I0115 14:13:22.981840 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 14:13:22.988357 
kubelet[2647]: I0115 14:13:22.988268 2647 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 14:13:22.993442 kubelet[2647]: I0115 14:13:22.993310 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 14:13:22.997373 kubelet[2647]: I0115 14:13:22.997338 2647 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 14:13:23.000204 kubelet[2647]: I0115 14:13:23.000172 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 14:13:23.008034 kubelet[2647]: E0115 14:13:23.007358 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-8ino3.gb1.brightbox.com\" not found" Jan 15 14:13:23.014498 kubelet[2647]: I0115 14:13:23.014456 2647 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 15 14:13:23.016763 kubelet[2647]: I0115 14:13:23.015229 2647 server.go:460] "Adding debug handlers to kubelet server" Jan 15 14:13:23.016763 kubelet[2647]: I0115 14:13:23.016035 2647 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 15 14:13:23.016763 kubelet[2647]: I0115 14:13:23.016378 2647 reconciler.go:26] "Reconciler: start to sync state" Jan 15 14:13:23.018250 kubelet[2647]: I0115 14:13:23.018222 2647 factory.go:221] Registration of the systemd container factory successfully Jan 15 14:13:23.018590 kubelet[2647]: I0115 14:13:23.018558 2647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 14:13:23.030129 kubelet[2647]: I0115 14:13:23.030080 2647 factory.go:221] Registration of the containerd container factory successfully Jan 15 14:13:23.036289 kubelet[2647]: I0115 14:13:23.036232 2647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 14:13:23.038222 kubelet[2647]: I0115 14:13:23.038197 2647 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 14:13:23.038382 kubelet[2647]: I0115 14:13:23.038360 2647 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 14:13:23.038545 kubelet[2647]: I0115 14:13:23.038523 2647 kubelet.go:2321] "Starting kubelet main sync loop" Jan 15 14:13:23.038738 kubelet[2647]: E0115 14:13:23.038686 2647 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 14:13:23.139109 kubelet[2647]: E0115 14:13:23.139041 2647 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 14:13:23.141577 kubelet[2647]: I0115 14:13:23.141534 2647 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 14:13:23.141706 kubelet[2647]: I0115 14:13:23.141562 2647 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 14:13:23.141706 kubelet[2647]: I0115 14:13:23.141627 2647 state_mem.go:36] "Initialized new in-memory state store" Jan 15 14:13:23.141930 kubelet[2647]: I0115 14:13:23.141900 2647 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 14:13:23.142032 kubelet[2647]: I0115 14:13:23.141930 2647 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 14:13:23.142032 kubelet[2647]: I0115 14:13:23.141971 2647 policy_none.go:49] "None policy: Start" Jan 15 14:13:23.143836 kubelet[2647]: I0115 14:13:23.143815 2647 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 14:13:23.144068 kubelet[2647]: I0115 14:13:23.143913 2647 state_mem.go:35] "Initializing new in-memory state store" Jan 15 14:13:23.144746 kubelet[2647]: I0115 14:13:23.144725 2647 state_mem.go:75] "Updated machine memory state" Jan 15 14:13:23.157403 kubelet[2647]: I0115 14:13:23.156289 2647 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 14:13:23.157403 kubelet[2647]: I0115 14:13:23.156653 2647 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 14:13:23.157403 kubelet[2647]: I0115 14:13:23.156680 2647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 14:13:23.157403 kubelet[2647]: I0115 14:13:23.157207 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 14:13:23.297217 kubelet[2647]: I0115 14:13:23.295916 2647 kubelet_node_status.go:72] "Attempting to register node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.312281 kubelet[2647]: I0115 14:13:23.311714 2647 kubelet_node_status.go:111] "Node was previously registered" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.312281 kubelet[2647]: I0115 14:13:23.311924 2647 kubelet_node_status.go:75] "Successfully registered node" node="srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.358013 kubelet[2647]: W0115 14:13:23.357597 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 14:13:23.359410 kubelet[2647]: W0115 14:13:23.359382 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 14:13:23.360116 kubelet[2647]: W0115 14:13:23.359930 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 14:13:23.418893 kubelet[2647]: 
I0115 14:13:23.418613 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-flexvolume-dir\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.418893 kubelet[2647]: I0115 14:13:23.418681 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-k8s-certs\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.418893 kubelet[2647]: I0115 14:13:23.418717 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-kubeconfig\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.418893 kubelet[2647]: I0115 14:13:23.418748 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb5ef8009fa7044faac5882b0636647c-kubeconfig\") pod \"kube-scheduler-srv-8ino3.gb1.brightbox.com\" (UID: \"fb5ef8009fa7044faac5882b0636647c\") " pod="kube-system/kube-scheduler-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.418893 kubelet[2647]: I0115 14:13:23.418777 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baee752f84fef4ea9490ea979ecaa6fe-ca-certs\") pod \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" (UID: \"baee752f84fef4ea9490ea979ecaa6fe\") " pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.419344 kubelet[2647]: I0115 14:13:23.418807 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baee752f84fef4ea9490ea979ecaa6fe-usr-share-ca-certificates\") pod \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" (UID: \"baee752f84fef4ea9490ea979ecaa6fe\") " pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.419344 kubelet[2647]: I0115 14:13:23.418837 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-ca-certs\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.419344 kubelet[2647]: I0115 14:13:23.418868 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/847205d29e9e61eec6e21f24cef534af-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-8ino3.gb1.brightbox.com\" (UID: \"847205d29e9e61eec6e21f24cef534af\") " pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.419344 kubelet[2647]: I0115 14:13:23.418898 2647 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baee752f84fef4ea9490ea979ecaa6fe-k8s-certs\") pod \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" (UID: \"baee752f84fef4ea9490ea979ecaa6fe\") " pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:23.976791 kubelet[2647]: I0115 14:13:23.974814 2647 apiserver.go:52] "Watching apiserver" Jan 15 14:13:24.016552 kubelet[2647]: I0115 14:13:24.016442 2647 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 15 14:13:24.112178 kubelet[2647]: W0115 14:13:24.110171 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 14:13:24.112178 kubelet[2647]: E0115 14:13:24.110619 2647 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-8ino3.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" Jan 15 14:13:24.172126 kubelet[2647]: I0115 14:13:24.171928 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-8ino3.gb1.brightbox.com" podStartSLOduration=1.171287754 podStartE2EDuration="1.171287754s" podCreationTimestamp="2025-01-15 14:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:13:24.151550063 +0000 UTC m=+1.324873639" watchObservedRunningTime="2025-01-15 14:13:24.171287754 +0000 UTC m=+1.344611321" Jan 15 14:13:24.194503 kubelet[2647]: I0115 14:13:24.194098 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-8ino3.gb1.brightbox.com" podStartSLOduration=1.19406739 podStartE2EDuration="1.19406739s" podCreationTimestamp="2025-01-15 14:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:13:24.172634074 +0000 UTC m=+1.345957641" watchObservedRunningTime="2025-01-15 14:13:24.19406739 +0000 UTC m=+1.367390959" Jan 15 14:13:27.749092 kubelet[2647]: I0115 14:13:27.747800 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-8ino3.gb1.brightbox.com" podStartSLOduration=4.747769187 podStartE2EDuration="4.747769187s" podCreationTimestamp="2025-01-15 14:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:13:24.197085964 +0000 UTC m=+1.370409538" watchObservedRunningTime="2025-01-15 14:13:27.747769187 +0000 UTC m=+4.921092754" Jan 15 14:13:28.278125 kubelet[2647]: I0115 14:13:28.278049 2647 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 14:13:28.279687 containerd[1513]: time="2025-01-15T14:13:28.279002525Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 14:13:28.280743 kubelet[2647]: I0115 14:13:28.279377 2647 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 14:13:29.002300 systemd[1]: Created slice kubepods-besteffort-podd6db9ba9_5d83_4647_b569_6e07c2b24899.slice - libcontainer container kubepods-besteffort-podd6db9ba9_5d83_4647_b569_6e07c2b24899.slice. 
Jan 15 14:13:29.060045 kubelet[2647]: I0115 14:13:29.059691 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lf8j\" (UniqueName: \"kubernetes.io/projected/d6db9ba9-5d83-4647-b569-6e07c2b24899-kube-api-access-2lf8j\") pod \"kube-proxy-p9sh9\" (UID: \"d6db9ba9-5d83-4647-b569-6e07c2b24899\") " pod="kube-system/kube-proxy-p9sh9" Jan 15 14:13:29.060045 kubelet[2647]: I0115 14:13:29.059773 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6db9ba9-5d83-4647-b569-6e07c2b24899-xtables-lock\") pod \"kube-proxy-p9sh9\" (UID: \"d6db9ba9-5d83-4647-b569-6e07c2b24899\") " pod="kube-system/kube-proxy-p9sh9" Jan 15 14:13:29.060045 kubelet[2647]: I0115 14:13:29.059809 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6db9ba9-5d83-4647-b569-6e07c2b24899-lib-modules\") pod \"kube-proxy-p9sh9\" (UID: \"d6db9ba9-5d83-4647-b569-6e07c2b24899\") " pod="kube-system/kube-proxy-p9sh9" Jan 15 14:13:29.060045 kubelet[2647]: I0115 14:13:29.059853 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6db9ba9-5d83-4647-b569-6e07c2b24899-kube-proxy\") pod \"kube-proxy-p9sh9\" (UID: \"d6db9ba9-5d83-4647-b569-6e07c2b24899\") " pod="kube-system/kube-proxy-p9sh9" Jan 15 14:13:29.181434 kubelet[2647]: E0115 14:13:29.181338 2647 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 15 14:13:29.181434 kubelet[2647]: E0115 14:13:29.181418 2647 projected.go:194] Error preparing data for projected volume kube-api-access-2lf8j for pod kube-system/kube-proxy-p9sh9: configmap "kube-root-ca.crt" not found Jan 15 14:13:29.181736 kubelet[2647]: E0115 14:13:29.181570 2647 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db9ba9-5d83-4647-b569-6e07c2b24899-kube-api-access-2lf8j podName:d6db9ba9-5d83-4647-b569-6e07c2b24899 nodeName:}" failed. No retries permitted until 2025-01-15 14:13:29.681524492 +0000 UTC m=+6.854848053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2lf8j" (UniqueName: "kubernetes.io/projected/d6db9ba9-5d83-4647-b569-6e07c2b24899-kube-api-access-2lf8j") pod "kube-proxy-p9sh9" (UID: "d6db9ba9-5d83-4647-b569-6e07c2b24899") : configmap "kube-root-ca.crt" not found Jan 15 14:13:29.382033 systemd[1]: Created slice kubepods-besteffort-pod72736027_74dd_44fc_8154_6684ea85a29c.slice - libcontainer container kubepods-besteffort-pod72736027_74dd_44fc_8154_6684ea85a29c.slice. 
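The MountVolume.SetUp failure above is a startup-ordering artifact rather than a real fault: a kube-api-access-* volume is a projection of three sources, and the second of them, the kube-root-ca.crt ConfigMap, is only published into each namespace by the controller-manager once it is up, so early pods retry the mount (here after a 500ms durationBeforeRetry). A sketch of that projection using client-go's types; the token lifetime is an assumed illustrative value:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessSources sketches the three projections behind an auto-injected
// kube-api-access-* volume. The failure logged above is the second source:
// the "kube-root-ca.crt" ConfigMap did not exist yet in the pod's namespace.
func kubeAPIAccessSources() []corev1.VolumeProjection {
	expiry := int64(3607) // assumed illustrative token lifetime
	return []corev1.VolumeProjection{
		{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
			Path:              "token",
			ExpirationSeconds: &expiry,
		}},
		{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
			Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
		}},
		{DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "namespace",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
			}},
		}},
	}
}

func main() {
	out, _ := json.MarshalIndent(kubeAPIAccessSources(), "", "  ")
	fmt.Println(string(out))
}
```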
Jan 15 14:13:29.465653 kubelet[2647]: I0115 14:13:29.465542 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72736027-74dd-44fc-8154-6684ea85a29c-var-lib-calico\") pod \"tigera-operator-76c4976dd7-r76x2\" (UID: \"72736027-74dd-44fc-8154-6684ea85a29c\") " pod="tigera-operator/tigera-operator-76c4976dd7-r76x2" Jan 15 14:13:29.465971 kubelet[2647]: I0115 14:13:29.465705 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwrx\" (UniqueName: \"kubernetes.io/projected/72736027-74dd-44fc-8154-6684ea85a29c-kube-api-access-fmwrx\") pod \"tigera-operator-76c4976dd7-r76x2\" (UID: \"72736027-74dd-44fc-8154-6684ea85a29c\") " pod="tigera-operator/tigera-operator-76c4976dd7-r76x2" Jan 15 14:13:29.617085 sudo[1754]: pam_unix(sudo:session): session closed for user root Jan 15 14:13:29.689866 containerd[1513]: time="2025-01-15T14:13:29.689760930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-r76x2,Uid:72736027-74dd-44fc-8154-6684ea85a29c,Namespace:tigera-operator,Attempt:0,}" Jan 15 14:13:29.734040 containerd[1513]: time="2025-01-15T14:13:29.733672354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:29.734040 containerd[1513]: time="2025-01-15T14:13:29.733798347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:29.734040 containerd[1513]: time="2025-01-15T14:13:29.733880299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:29.735613 containerd[1513]: time="2025-01-15T14:13:29.735502050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:29.766417 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 15 14:13:29.775648 systemd[1]: Started cri-containerd-ae6f10b267fded37de98ec973f26727ce18482415663d063220e8d9a6a4aec68.scope - libcontainer container ae6f10b267fded37de98ec973f26727ce18482415663d063220e8d9a6a4aec68. Jan 15 14:13:29.784968 systemd[1]: sshd@6-10.244.21.14:22-147.75.109.163:58016.service: Deactivated successfully. Jan 15 14:13:29.790658 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 14:13:29.791788 systemd[1]: session-9.scope: Consumed 6.422s CPU time, 145.1M memory peak, 0B memory swap peak. Jan 15 14:13:29.794780 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Jan 15 14:13:29.798867 systemd-logind[1493]: Removed session 9. 
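The RunPodSandbox entries above are kubelet's side of an ordinary gRPC call into containerd's CRI RuntimeService. A minimal standalone sketch of the same call, assuming containerd's default socket path and filling in only the metadata logged for the tigera-operator pod (a real caller also supplies log directory, DNS, and linux options; error handling is trimmed to panics):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is an assumption: containerd's default CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "tigera-operator-76c4976dd7-r76x2",
				Uid:       "72736027-74dd-44fc-8154-6684ea85a29c",
				Namespace: "tigera-operator",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. ae6f10b2...
}
```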
Jan 15 14:13:29.851882 containerd[1513]: time="2025-01-15T14:13:29.851790376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-r76x2,Uid:72736027-74dd-44fc-8154-6684ea85a29c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ae6f10b267fded37de98ec973f26727ce18482415663d063220e8d9a6a4aec68\"" Jan 15 14:13:29.857066 containerd[1513]: time="2025-01-15T14:13:29.856905593Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 15 14:13:29.917413 containerd[1513]: time="2025-01-15T14:13:29.917349603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9sh9,Uid:d6db9ba9-5d83-4647-b569-6e07c2b24899,Namespace:kube-system,Attempt:0,}" Jan 15 14:13:29.952719 containerd[1513]: time="2025-01-15T14:13:29.952126449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:29.952719 containerd[1513]: time="2025-01-15T14:13:29.952253764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:29.952719 containerd[1513]: time="2025-01-15T14:13:29.952279167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:29.954147 containerd[1513]: time="2025-01-15T14:13:29.952705257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:29.989352 systemd[1]: Started cri-containerd-2c3e92efd485c210474a2befab6c7adcb2f3f5662a94a7dedc2cd9fc150dd825.scope - libcontainer container 2c3e92efd485c210474a2befab6c7adcb2f3f5662a94a7dedc2cd9fc150dd825. Jan 15 14:13:30.033149 containerd[1513]: time="2025-01-15T14:13:30.033012995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9sh9,Uid:d6db9ba9-5d83-4647-b569-6e07c2b24899,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c3e92efd485c210474a2befab6c7adcb2f3f5662a94a7dedc2cd9fc150dd825\"" Jan 15 14:13:30.042908 containerd[1513]: time="2025-01-15T14:13:30.042844532Z" level=info msg="CreateContainer within sandbox \"2c3e92efd485c210474a2befab6c7adcb2f3f5662a94a7dedc2cd9fc150dd825\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 14:13:30.064585 containerd[1513]: time="2025-01-15T14:13:30.063809351Z" level=info msg="CreateContainer within sandbox \"2c3e92efd485c210474a2befab6c7adcb2f3f5662a94a7dedc2cd9fc150dd825\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ca8ed97be2fe6e1faa60b4ec70b8ab0ca875d14f8bb8d37001cf1c9504b9252\"" Jan 15 14:13:30.065201 containerd[1513]: time="2025-01-15T14:13:30.065166074Z" level=info msg="StartContainer for \"2ca8ed97be2fe6e1faa60b4ec70b8ab0ca875d14f8bb8d37001cf1c9504b9252\"" Jan 15 14:13:30.107630 systemd[1]: Started cri-containerd-2ca8ed97be2fe6e1faa60b4ec70b8ab0ca875d14f8bb8d37001cf1c9504b9252.scope - libcontainer container 2ca8ed97be2fe6e1faa60b4ec70b8ab0ca875d14f8bb8d37001cf1c9504b9252. 
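With the kube-proxy sandbox up, the next entries show the rest of the CRI container lifecycle: CreateContainer registers a container config against the sandbox and returns an id, then StartContainer launches it (systemd wraps it in the cri-containerd-<id>.scope seen above). Continuing the client sketch as a helper; the image reference is an assumption, since the journal never names kube-proxy's image:

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInSandbox sketches the CreateContainer/StartContainer pair behind the
// kube-proxy entries above. The caller passes a connected RuntimeService
// client plus the PodSandboxConfig it used for RunPodSandbox, which CRI
// requires again at container creation.
func startInSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Assumed image; the log records only the container name.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}
```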
Jan 15 14:13:30.163778 containerd[1513]: time="2025-01-15T14:13:30.163681030Z" level=info msg="StartContainer for \"2ca8ed97be2fe6e1faa60b4ec70b8ab0ca875d14f8bb8d37001cf1c9504b9252\" returns successfully" Jan 15 14:13:31.162597 kubelet[2647]: I0115 14:13:31.161307 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p9sh9" podStartSLOduration=3.161249387 podStartE2EDuration="3.161249387s" podCreationTimestamp="2025-01-15 14:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:13:31.161049195 +0000 UTC m=+8.334372771" watchObservedRunningTime="2025-01-15 14:13:31.161249387 +0000 UTC m=+8.334572950" Jan 15 14:13:32.650073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278320783.mount: Deactivated successfully. Jan 15 14:13:33.610141 containerd[1513]: time="2025-01-15T14:13:33.608868207Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:33.611842 containerd[1513]: time="2025-01-15T14:13:33.611787710Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764273" Jan 15 14:13:33.612945 containerd[1513]: time="2025-01-15T14:13:33.612907568Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:33.616933 containerd[1513]: time="2025-01-15T14:13:33.616852251Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:33.618234 containerd[1513]: time="2025-01-15T14:13:33.617950236Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.760924525s" Jan 15 14:13:33.618234 containerd[1513]: time="2025-01-15T14:13:33.618022463Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 15 14:13:33.628499 containerd[1513]: time="2025-01-15T14:13:33.628362794Z" level=info msg="CreateContainer within sandbox \"ae6f10b267fded37de98ec973f26727ce18482415663d063220e8d9a6a4aec68\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 14:13:33.650697 containerd[1513]: time="2025-01-15T14:13:33.650356567Z" level=info msg="CreateContainer within sandbox \"ae6f10b267fded37de98ec973f26727ce18482415663d063220e8d9a6a4aec68\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d66db96ca66ac670281853adb323dc728417665cac8d923957cb35fd244ebd9b\"" Jan 15 14:13:33.651745 containerd[1513]: time="2025-01-15T14:13:33.651221784Z" level=info msg="StartContainer for \"d66db96ca66ac670281853adb323dc728417665cac8d923957cb35fd244ebd9b\"" Jan 15 14:13:33.706536 systemd[1]: Started cri-containerd-d66db96ca66ac670281853adb323dc728417665cac8d923957cb35fd244ebd9b.scope - libcontainer container d66db96ca66ac670281853adb323dc728417665cac8d923957cb35fd244ebd9b. 
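Image pulls ride the same socket but a separate CRI service: the PullImage "quay.io/tigera/operator:v1.36.2" request above maps to ImageService.PullImage, and the digest-form reference it returns is what the "Pulled image ... returns image reference" entry above records. A sketch, where img is assumed to be built from the same connection via runtimeapi.NewImageServiceClient(conn):

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pullOperatorImage sketches the ImageService call behind the PullImage
// entries above. Pulls of public images need no AuthConfig.
func pullOperatorImage(ctx context.Context, img runtimeapi.ImageServiceClient) (string, error) {
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		return "", err
	}
	// On success this is the resolved digest reference, e.g.
	// quay.io/tigera/operator@sha256:fc9ea45f...
	return resp.ImageRef, nil
}
```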
Jan 15 14:13:33.753803 containerd[1513]: time="2025-01-15T14:13:33.752518747Z" level=info msg="StartContainer for \"d66db96ca66ac670281853adb323dc728417665cac8d923957cb35fd244ebd9b\" returns successfully" Jan 15 14:13:34.171764 kubelet[2647]: I0115 14:13:34.171640 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-r76x2" podStartSLOduration=1.40024267 podStartE2EDuration="5.171615488s" podCreationTimestamp="2025-01-15 14:13:29 +0000 UTC" firstStartedPulling="2025-01-15 14:13:29.854576855 +0000 UTC m=+7.027900410" lastFinishedPulling="2025-01-15 14:13:33.625949668 +0000 UTC m=+10.799273228" observedRunningTime="2025-01-15 14:13:34.171294658 +0000 UTC m=+11.344618232" watchObservedRunningTime="2025-01-15 14:13:34.171615488 +0000 UTC m=+11.344939055" Jan 15 14:13:37.179122 systemd[1]: Created slice kubepods-besteffort-pode7c26cdb_0c1b_4515_a81a_141714a56c51.slice - libcontainer container kubepods-besteffort-pode7c26cdb_0c1b_4515_a81a_141714a56c51.slice. Jan 15 14:13:37.222133 kubelet[2647]: I0115 14:13:37.222053 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c26cdb-0c1b-4515-a81a-141714a56c51-tigera-ca-bundle\") pod \"calico-typha-84bfdccdfd-4rcvj\" (UID: \"e7c26cdb-0c1b-4515-a81a-141714a56c51\") " pod="calico-system/calico-typha-84bfdccdfd-4rcvj" Jan 15 14:13:37.222133 kubelet[2647]: I0115 14:13:37.222141 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7c26cdb-0c1b-4515-a81a-141714a56c51-typha-certs\") pod \"calico-typha-84bfdccdfd-4rcvj\" (UID: \"e7c26cdb-0c1b-4515-a81a-141714a56c51\") " pod="calico-system/calico-typha-84bfdccdfd-4rcvj" Jan 15 14:13:37.222942 kubelet[2647]: I0115 14:13:37.222178 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srd5\" (UniqueName: \"kubernetes.io/projected/e7c26cdb-0c1b-4515-a81a-141714a56c51-kube-api-access-2srd5\") pod \"calico-typha-84bfdccdfd-4rcvj\" (UID: \"e7c26cdb-0c1b-4515-a81a-141714a56c51\") " pod="calico-system/calico-typha-84bfdccdfd-4rcvj" Jan 15 14:13:37.391297 systemd[1]: Created slice kubepods-besteffort-podf602104c_c8d0_427f_883c_b91c3f793979.slice - libcontainer container kubepods-besteffort-podf602104c_c8d0_427f_883c_b91c3f793979.slice. 
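The two pod_startup_latency_tracker entries above obey a simple identity: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling). kube-proxy pulled nothing (both pull fields are the zero time), so its SLO and E2E durations coincide at 3.161249387s; tigera-operator spent most of its startup pulling, and the tracker excludes that. Reproducing the tigera-operator figure from the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied from the tigera-operator tracker entry above.
	e2e := 5171615488 * time.Nanosecond // podStartE2EDuration=5.171615488s
	pullStart := parse("2025-01-15T14:13:29.854576855Z")
	pullEnd := parse("2025-01-15T14:13:33.625949668Z")

	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(slo) // 1.400242675s, matching the logged podStartSLOduration=1.40024267
}
```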
Jan 15 14:13:37.426855 kubelet[2647]: I0115 14:13:37.426752 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-policysync\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.427312 kubelet[2647]: I0115 14:13:37.427277 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f602104c-c8d0-427f-883c-b91c3f793979-node-certs\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428232 kubelet[2647]: I0115 14:13:37.427546 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f602104c-c8d0-427f-883c-b91c3f793979-tigera-ca-bundle\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428232 kubelet[2647]: I0115 14:13:37.427586 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-cni-net-dir\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428232 kubelet[2647]: I0115 14:13:37.427647 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-cni-log-dir\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428232 kubelet[2647]: I0115 14:13:37.427680 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-lib-modules\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428232 kubelet[2647]: I0115 14:13:37.427707 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-var-run-calico\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428544 kubelet[2647]: I0115 14:13:37.427763 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-xtables-lock\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428544 kubelet[2647]: I0115 14:13:37.427801 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-cni-bin-dir\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428544 kubelet[2647]: I0115 14:13:37.427854 2647 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhx59\" (UniqueName: \"kubernetes.io/projected/f602104c-c8d0-427f-883c-b91c3f793979-kube-api-access-dhx59\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428544 kubelet[2647]: I0115 14:13:37.427893 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-var-lib-calico\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.428544 kubelet[2647]: I0115 14:13:37.427924 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f602104c-c8d0-427f-883c-b91c3f793979-flexvol-driver-host\") pod \"calico-node-748l2\" (UID: \"f602104c-c8d0-427f-883c-b91c3f793979\") " pod="calico-system/calico-node-748l2" Jan 15 14:13:37.489673 containerd[1513]: time="2025-01-15T14:13:37.489403573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84bfdccdfd-4rcvj,Uid:e7c26cdb-0c1b-4515-a81a-141714a56c51,Namespace:calico-system,Attempt:0,}" Jan 15 14:13:37.554484 kubelet[2647]: E0115 14:13:37.553383 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.554484 kubelet[2647]: W0115 14:13:37.553447 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.554484 kubelet[2647]: E0115 14:13:37.553518 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.570599 kubelet[2647]: E0115 14:13:37.570385 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.570599 kubelet[2647]: W0115 14:13:37.570418 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.570599 kubelet[2647]: E0115 14:13:37.570451 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.600020 kubelet[2647]: E0115 14:13:37.598250 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.600020 kubelet[2647]: W0115 14:13:37.598282 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.600020 kubelet[2647]: E0115 14:13:37.598314 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.618509 containerd[1513]: time="2025-01-15T14:13:37.616186385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:37.618509 containerd[1513]: time="2025-01-15T14:13:37.616424770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:37.618509 containerd[1513]: time="2025-01-15T14:13:37.616574111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:37.618509 containerd[1513]: time="2025-01-15T14:13:37.617960126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:37.676037 kubelet[2647]: E0115 14:13:37.675652 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:37.678359 systemd[1]: Started cri-containerd-b7af0d7074037439ce282057926e2013d69ffdcc0dd64de44d7eb668676db10e.scope - libcontainer container b7af0d7074037439ce282057926e2013d69ffdcc0dd64de44d7eb668676db10e. Jan 15 14:13:37.698264 containerd[1513]: time="2025-01-15T14:13:37.698202634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-748l2,Uid:f602104c-c8d0-427f-883c-b91c3f793979,Namespace:calico-system,Attempt:0,}" Jan 15 14:13:37.724538 kubelet[2647]: E0115 14:13:37.724475 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.724538 kubelet[2647]: W0115 14:13:37.724522 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.724538 kubelet[2647]: E0115 14:13:37.724560 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.725537 kubelet[2647]: E0115 14:13:37.725510 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.725537 kubelet[2647]: W0115 14:13:37.725535 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.725666 kubelet[2647]: E0115 14:13:37.725553 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.733242 kubelet[2647]: E0115 14:13:37.733186 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.733242 kubelet[2647]: W0115 14:13:37.733235 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.733496 kubelet[2647]: E0115 14:13:37.733276 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.734677 kubelet[2647]: E0115 14:13:37.734645 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.734677 kubelet[2647]: W0115 14:13:37.734671 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.734822 kubelet[2647]: E0115 14:13:37.734701 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.736188 kubelet[2647]: E0115 14:13:37.736159 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.736188 kubelet[2647]: W0115 14:13:37.736185 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.736336 kubelet[2647]: E0115 14:13:37.736207 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.737042 kubelet[2647]: E0115 14:13:37.737016 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.737042 kubelet[2647]: W0115 14:13:37.737041 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.737179 kubelet[2647]: E0115 14:13:37.737060 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.738117 kubelet[2647]: E0115 14:13:37.738089 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.738117 kubelet[2647]: W0115 14:13:37.738115 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.738278 kubelet[2647]: E0115 14:13:37.738134 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.739462 kubelet[2647]: E0115 14:13:37.739428 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.739462 kubelet[2647]: W0115 14:13:37.739458 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.742781 kubelet[2647]: E0115 14:13:37.739479 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.742781 kubelet[2647]: E0115 14:13:37.740140 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.742781 kubelet[2647]: W0115 14:13:37.740159 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.742781 kubelet[2647]: E0115 14:13:37.740179 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.744171 kubelet[2647]: E0115 14:13:37.744126 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.744249 kubelet[2647]: W0115 14:13:37.744169 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.744249 kubelet[2647]: E0115 14:13:37.744215 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.744991 kubelet[2647]: E0115 14:13:37.744951 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.745064 kubelet[2647]: W0115 14:13:37.744992 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.745064 kubelet[2647]: E0115 14:13:37.745012 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.745613 kubelet[2647]: E0115 14:13:37.745588 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.745613 kubelet[2647]: W0115 14:13:37.745609 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.745726 kubelet[2647]: E0115 14:13:37.745626 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.747290 kubelet[2647]: E0115 14:13:37.747256 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.747290 kubelet[2647]: W0115 14:13:37.747285 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.747437 kubelet[2647]: E0115 14:13:37.747306 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.747587 kubelet[2647]: E0115 14:13:37.747563 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.747587 kubelet[2647]: W0115 14:13:37.747584 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.747704 kubelet[2647]: E0115 14:13:37.747604 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.747884 kubelet[2647]: E0115 14:13:37.747838 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.747884 kubelet[2647]: W0115 14:13:37.747860 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.747884 kubelet[2647]: E0115 14:13:37.747876 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.748346 kubelet[2647]: E0115 14:13:37.748156 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.748346 kubelet[2647]: W0115 14:13:37.748171 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.748346 kubelet[2647]: E0115 14:13:37.748186 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.749722 kubelet[2647]: E0115 14:13:37.749695 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.749722 kubelet[2647]: W0115 14:13:37.749728 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.749845 kubelet[2647]: E0115 14:13:37.749769 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.749894 kubelet[2647]: I0115 14:13:37.749807 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cb0d9ef-84b8-4638-9648-eb1fe2376a04-kubelet-dir\") pod \"csi-node-driver-hvqsn\" (UID: \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\") " pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:13:37.750752 kubelet[2647]: E0115 14:13:37.750667 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.750847 kubelet[2647]: W0115 14:13:37.750756 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.752102 kubelet[2647]: E0115 14:13:37.751128 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.752102 kubelet[2647]: I0115 14:13:37.751212 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cb0d9ef-84b8-4638-9648-eb1fe2376a04-socket-dir\") pod \"csi-node-driver-hvqsn\" (UID: \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\") " pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:13:37.752414 kubelet[2647]: E0115 14:13:37.752386 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.752414 kubelet[2647]: W0115 14:13:37.752413 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.754542 kubelet[2647]: E0115 14:13:37.754497 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.756447 kubelet[2647]: E0115 14:13:37.756412 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.756513 kubelet[2647]: W0115 14:13:37.756446 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.756629 kubelet[2647]: E0115 14:13:37.756602 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.756817 kubelet[2647]: E0115 14:13:37.756790 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.756817 kubelet[2647]: W0115 14:13:37.756811 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.757838 kubelet[2647]: E0115 14:13:37.756948 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.757838 kubelet[2647]: E0115 14:13:37.757073 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.757838 kubelet[2647]: W0115 14:13:37.757088 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.757838 kubelet[2647]: E0115 14:13:37.757333 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.757838 kubelet[2647]: W0115 14:13:37.757347 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.757838 kubelet[2647]: E0115 14:13:37.757567 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.757838 kubelet[2647]: W0115 14:13:37.757581 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.757838 kubelet[2647]: E0115 14:13:37.757596 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.757838 kubelet[2647]: E0115 14:13:37.757633 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.758430 kubelet[2647]: I0115 14:13:37.757765 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cb0d9ef-84b8-4638-9648-eb1fe2376a04-registration-dir\") pod \"csi-node-driver-hvqsn\" (UID: \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\") " pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:13:37.758430 kubelet[2647]: E0115 14:13:37.757797 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.758430 kubelet[2647]: E0115 14:13:37.757859 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.758430 kubelet[2647]: W0115 14:13:37.757874 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.758430 kubelet[2647]: E0115 14:13:37.757898 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.758634 kubelet[2647]: E0115 14:13:37.758247 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.758634 kubelet[2647]: W0115 14:13:37.758481 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.760060 kubelet[2647]: E0115 14:13:37.760025 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.760760 kubelet[2647]: E0115 14:13:37.760726 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.760760 kubelet[2647]: W0115 14:13:37.760750 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.760966 kubelet[2647]: E0115 14:13:37.760939 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.761588 kubelet[2647]: E0115 14:13:37.761563 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.761588 kubelet[2647]: W0115 14:13:37.761584 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.763480 kubelet[2647]: E0115 14:13:37.763440 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.763815 kubelet[2647]: E0115 14:13:37.763784 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.763815 kubelet[2647]: W0115 14:13:37.763811 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.763938 kubelet[2647]: E0115 14:13:37.763845 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.763938 kubelet[2647]: I0115 14:13:37.763887 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7cb0d9ef-84b8-4638-9648-eb1fe2376a04-varrun\") pod \"csi-node-driver-hvqsn\" (UID: \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\") " pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:13:37.767039 kubelet[2647]: E0115 14:13:37.766369 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.767039 kubelet[2647]: W0115 14:13:37.766409 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.767039 kubelet[2647]: E0115 14:13:37.766692 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.767039 kubelet[2647]: W0115 14:13:37.766707 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.767039 kubelet[2647]: E0115 14:13:37.766728 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.767039 kubelet[2647]: E0115 14:13:37.766828 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.768647 kubelet[2647]: E0115 14:13:37.767122 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.768647 kubelet[2647]: W0115 14:13:37.767136 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.768647 kubelet[2647]: E0115 14:13:37.767152 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.798244 containerd[1513]: time="2025-01-15T14:13:37.797691835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:13:37.798244 containerd[1513]: time="2025-01-15T14:13:37.797795453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:13:37.798244 containerd[1513]: time="2025-01-15T14:13:37.797814525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:37.798244 containerd[1513]: time="2025-01-15T14:13:37.797957758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:13:37.858592 systemd[1]: Started cri-containerd-0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b.scope - libcontainer container 0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b. 
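The storm of driver-call failures running through this stretch has a single root cause: kubelet's dynamic plugin prober keeps executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, and the resulting empty stdout fails JSON unmarshalling ("unexpected end of JSON input"). The FlexVolume call convention expects a driver to answer init with a JSON status object on stdout; calico-node's flexvol-driver-host host-path mount above is how the real uds binary is meant to land in that directory. A minimal driver satisfying the probe might look like this (a sketch of the call convention, not Calico's actual binary):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the FlexVolume convention expects on
// stdout for every call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	call := ""
	if len(os.Args) > 1 {
		call = os.Args[1]
	}
	if call == "init" {
		// This is the answer whose absence produces the errors above: a
		// success status plus a capability map telling kubelet not to expect
		// attach/detach support from this driver.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any call this sketch does not implement reports "Not supported".
	out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: call})
	fmt.Println(string(out))
	os.Exit(1)
}
```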
Jan 15 14:13:37.871176 kubelet[2647]: E0115 14:13:37.870344 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.871176 kubelet[2647]: W0115 14:13:37.870383 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.871176 kubelet[2647]: E0115 14:13:37.870426 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.878455 kubelet[2647]: E0115 14:13:37.877902 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.878455 kubelet[2647]: W0115 14:13:37.877918 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.878455 kubelet[2647]: E0115 14:13:37.878022 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.878455 kubelet[2647]: I0115 14:13:37.878096 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwn97\" (UniqueName: \"kubernetes.io/projected/7cb0d9ef-84b8-4638-9648-eb1fe2376a04-kube-api-access-dwn97\") pod \"csi-node-driver-hvqsn\" (UID: \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\") " pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:13:37.881000 kubelet[2647]: E0115 14:13:37.878512 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.881000 kubelet[2647]: W0115 14:13:37.878528 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.881000 kubelet[2647]: E0115 14:13:37.878840 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.881000 kubelet[2647]: E0115 14:13:37.879169 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.881000 kubelet[2647]: W0115 14:13:37.879183 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.881000 kubelet[2647]: E0115 14:13:37.879246 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.881000 kubelet[2647]: E0115 14:13:37.879863 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.881000 kubelet[2647]: W0115 14:13:37.879879 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.881000 kubelet[2647]: E0115 14:13:37.880569 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.881000 kubelet[2647]: W0115 14:13:37.880585 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.881478 kubelet[2647]: E0115 14:13:37.881034 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.881478 kubelet[2647]: W0115 14:13:37.881049 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.881733 kubelet[2647]: E0115 14:13:37.881597 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:37.981156 kubelet[2647]: E0115 14:13:37.980539 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.981156 kubelet[2647]: W0115 14:13:37.980616 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.981156 kubelet[2647]: E0115 14:13:37.981064 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:37.985108 kubelet[2647]: E0115 14:13:37.984341 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:37.985108 kubelet[2647]: W0115 14:13:37.984358 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:37.985108 kubelet[2647]: E0115 14:13:37.984373 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 14:13:38.001970 containerd[1513]: time="2025-01-15T14:13:37.999225683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-748l2,Uid:f602104c-c8d0-427f-883c-b91c3f793979,Namespace:calico-system,Attempt:0,} returns sandbox id \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\"" Jan 15 14:13:38.010716 containerd[1513]: time="2025-01-15T14:13:38.007812064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 15 14:13:38.023579 kubelet[2647]: E0115 14:13:38.023477 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 14:13:38.023579 kubelet[2647]: W0115 14:13:38.023574 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 14:13:38.023924 kubelet[2647]: E0115 14:13:38.023612 2647 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 14:13:38.085045 containerd[1513]: time="2025-01-15T14:13:38.084668380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84bfdccdfd-4rcvj,Uid:e7c26cdb-0c1b-4515-a81a-141714a56c51,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7af0d7074037439ce282057926e2013d69ffdcc0dd64de44d7eb668676db10e\"" Jan 15 14:13:40.039886 kubelet[2647]: E0115 14:13:40.039724 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:40.747283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040102539.mount: Deactivated successfully. 
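The csi-node-driver-hvqsn pod above cannot be synced because kubelet still reports NetworkReady=false: no CNI plugin has been initialized yet, since Calico has not dropped a network config for the container runtime to load. The install-cni container pulled further down in this log is what eventually fixes that. As a rough readiness check, assuming the conventional /etc/cni/net.d configuration directory (the path is configurable in containerd and kubelet, so treat this as a sketch):

```python
#!/usr/bin/env python3
"""Report whether a CNI network config is present yet (conventional path assumed)."""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # assumption: default containerd CNI conf dir

def cni_ready() -> bool:
    if not CNI_CONF_DIR.is_dir():
        return False
    # The runtime picks up *.conf / *.conflist files from this directory.
    return any(p.suffix in (".conf", ".conflist") for p in CNI_CONF_DIR.iterdir())

if __name__ == "__main__":
    print("CNI config present" if cni_ready() else
          "no CNI config yet -> NetworkReady=false, pod sandboxes will fail")
```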
Jan 15 14:13:41.049967 containerd[1513]: time="2025-01-15T14:13:41.049648084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:41.052428 containerd[1513]: time="2025-01-15T14:13:41.052330717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 15 14:13:41.053662 containerd[1513]: time="2025-01-15T14:13:41.053529124Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:41.060601 containerd[1513]: time="2025-01-15T14:13:41.060554335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:41.062639 containerd[1513]: time="2025-01-15T14:13:41.061932638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 3.05404457s" Jan 15 14:13:41.062639 containerd[1513]: time="2025-01-15T14:13:41.062002110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 15 14:13:41.064928 containerd[1513]: time="2025-01-15T14:13:41.064528569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 15 14:13:41.071269 containerd[1513]: time="2025-01-15T14:13:41.070525204Z" level=info msg="CreateContainer within sandbox \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 14:13:41.116644 containerd[1513]: time="2025-01-15T14:13:41.116466196Z" level=info msg="CreateContainer within sandbox \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8\"" Jan 15 14:13:41.119903 containerd[1513]: time="2025-01-15T14:13:41.119862779Z" level=info msg="StartContainer for \"0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8\"" Jan 15 14:13:41.210709 systemd[1]: Started cri-containerd-0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8.scope - libcontainer container 0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8. Jan 15 14:13:41.292718 containerd[1513]: time="2025-01-15T14:13:41.292499904Z" level=info msg="StartContainer for \"0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8\" returns successfully" Jan 15 14:13:41.361271 systemd[1]: cri-containerd-0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8.scope: Deactivated successfully. 
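The pull that completes here reports both the image size and the wall-clock duration, so the effective pull throughput falls straight out of the logged numbers; a throwaway calculation using the values copied from the entries above:

```python
# Effective pull throughput for pod2daemon-flexvol:v3.29.1, from the log above.
size_bytes = 6_855_165          # 'size "6855165"' in the Pulled image entry
duration_s = 3.05404457         # 'in 3.05404457s'
print(f"{size_bytes / duration_s / 1024**2:.2f} MiB/s")  # ~2.14 MiB/s
```

The scope deactivation at the end of this stretch, followed by the shim-disconnected and runc cleanup warnings in the next entries, is most likely just the teardown of the short-lived flexvol-driver container after it finishes installing the FlexVolume binary, not a crash.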
Jan 15 14:13:41.610011 containerd[1513]: time="2025-01-15T14:13:41.609520930Z" level=info msg="shim disconnected" id=0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8 namespace=k8s.io Jan 15 14:13:41.610011 containerd[1513]: time="2025-01-15T14:13:41.609806972Z" level=warning msg="cleaning up after shim disconnected" id=0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8 namespace=k8s.io Jan 15 14:13:41.610374 containerd[1513]: time="2025-01-15T14:13:41.610102844Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 14:13:41.652632 containerd[1513]: time="2025-01-15T14:13:41.651391613Z" level=warning msg="cleanup warnings time=\"2025-01-15T14:13:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 15 14:13:41.680206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d7d6f48e953da96f76967dc90e3664c1ec085841991be4d8d834c8d1db1f8f8-rootfs.mount: Deactivated successfully. Jan 15 14:13:42.045150 kubelet[2647]: E0115 14:13:42.044395 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:44.040418 kubelet[2647]: E0115 14:13:44.040283 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:46.039236 kubelet[2647]: E0115 14:13:46.039157 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:48.039124 kubelet[2647]: E0115 14:13:48.039021 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:50.040570 kubelet[2647]: E0115 14:13:50.039881 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:52.042400 kubelet[2647]: E0115 14:13:52.040300 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:52.439574 containerd[1513]: time="2025-01-15T14:13:52.439435091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 15 14:13:52.440841 containerd[1513]: time="2025-01-15T14:13:52.440764064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 15 14:13:52.441663 containerd[1513]: time="2025-01-15T14:13:52.441567038Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:52.444533 containerd[1513]: time="2025-01-15T14:13:52.444458061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:13:52.445915 containerd[1513]: time="2025-01-15T14:13:52.445652113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 11.380843119s" Jan 15 14:13:52.445915 containerd[1513]: time="2025-01-15T14:13:52.445713646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 15 14:13:52.456207 containerd[1513]: time="2025-01-15T14:13:52.456168930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 15 14:13:52.492648 containerd[1513]: time="2025-01-15T14:13:52.492285149Z" level=info msg="CreateContainer within sandbox \"b7af0d7074037439ce282057926e2013d69ffdcc0dd64de44d7eb668676db10e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 14:13:52.521539 containerd[1513]: time="2025-01-15T14:13:52.521367069Z" level=info msg="CreateContainer within sandbox \"b7af0d7074037439ce282057926e2013d69ffdcc0dd64de44d7eb668676db10e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"677547f090c2110cf171aa2f39d0244ca0fd52c426c38977ab266e8343e65f27\"" Jan 15 14:13:52.523154 containerd[1513]: time="2025-01-15T14:13:52.522732924Z" level=info msg="StartContainer for \"677547f090c2110cf171aa2f39d0244ca0fd52c426c38977ab266e8343e65f27\"" Jan 15 14:13:52.631376 systemd[1]: Started cri-containerd-677547f090c2110cf171aa2f39d0244ca0fd52c426c38977ab266e8343e65f27.scope - libcontainer container 677547f090c2110cf171aa2f39d0244ca0fd52c426c38977ab266e8343e65f27. 
Jan 15 14:13:52.706894 containerd[1513]: time="2025-01-15T14:13:52.705681851Z" level=info msg="StartContainer for \"677547f090c2110cf171aa2f39d0244ca0fd52c426c38977ab266e8343e65f27\" returns successfully" Jan 15 14:13:53.268955 kubelet[2647]: I0115 14:13:53.266395 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84bfdccdfd-4rcvj" podStartSLOduration=1.904081015 podStartE2EDuration="16.266351647s" podCreationTimestamp="2025-01-15 14:13:37 +0000 UTC" firstStartedPulling="2025-01-15 14:13:38.0884298 +0000 UTC m=+15.261753360" lastFinishedPulling="2025-01-15 14:13:52.45070043 +0000 UTC m=+29.624023992" observedRunningTime="2025-01-15 14:13:53.266139319 +0000 UTC m=+30.439462891" watchObservedRunningTime="2025-01-15 14:13:53.266351647 +0000 UTC m=+30.439675233" Jan 15 14:13:54.039533 kubelet[2647]: E0115 14:13:54.039354 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:54.244564 kubelet[2647]: I0115 14:13:54.244398 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 14:13:56.039911 kubelet[2647]: E0115 14:13:56.039795 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:13:58.040120 kubelet[2647]: E0115 14:13:58.039900 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:00.041044 kubelet[2647]: E0115 14:14:00.040387 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:02.046439 kubelet[2647]: E0115 14:14:02.046263 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:04.045671 kubelet[2647]: E0115 14:14:04.038973 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:05.530016 containerd[1513]: time="2025-01-15T14:14:05.528280163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:05.531168 containerd[1513]: time="2025-01-15T14:14:05.529956198Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 15 14:14:05.531304 containerd[1513]: time="2025-01-15T14:14:05.530602374Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:05.533298 containerd[1513]: time="2025-01-15T14:14:05.533243037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:05.534899 containerd[1513]: time="2025-01-15T14:14:05.534859187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 13.078431197s" Jan 15 14:14:05.535089 containerd[1513]: time="2025-01-15T14:14:05.535059564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 15 14:14:05.544263 containerd[1513]: time="2025-01-15T14:14:05.544184748Z" level=info msg="CreateContainer within sandbox \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 14:14:05.592263 containerd[1513]: time="2025-01-15T14:14:05.592183629Z" level=info msg="CreateContainer within sandbox \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086\"" Jan 15 14:14:05.596939 containerd[1513]: time="2025-01-15T14:14:05.595022223Z" level=info msg="StartContainer for \"e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086\"" Jan 15 14:14:05.683640 systemd[1]: run-containerd-runc-k8s.io-e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086-runc.W72krd.mount: Deactivated successfully. Jan 15 14:14:05.696201 systemd[1]: Started cri-containerd-e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086.scope - libcontainer container e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086. Jan 15 14:14:05.763647 containerd[1513]: time="2025-01-15T14:14:05.763544855Z" level=info msg="StartContainer for \"e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086\" returns successfully" Jan 15 14:14:05.887353 kubelet[2647]: I0115 14:14:05.887179 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 14:14:06.040059 kubelet[2647]: E0115 14:14:06.039938 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:06.818126 systemd[1]: cri-containerd-e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086.scope: Deactivated successfully. 
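The pod_startup_latency_tracker entry a few lines up for calico-typha-84bfdccdfd-4rcvj encodes a small piece of arithmetic: podStartSLOduration appears to be the end-to-end startup time minus the time spent pulling images, which is how a 16.27 s startup is reported as a 1.90 s SLO figure. A quick check against the logged timestamps (truncated here to microsecond precision):

```python
# Reproduce the pod_startup_latency_tracker arithmetic for calico-typha-84bfdccdfd-4rcvj
# using the timestamps logged above.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created  = datetime.strptime("2025-01-15 14:13:37.000000 +0000", fmt)  # podCreationTimestamp
pull_beg = datetime.strptime("2025-01-15 14:13:38.088430 +0000", fmt)  # firstStartedPulling
pull_end = datetime.strptime("2025-01-15 14:13:52.450700 +0000", fmt)  # lastFinishedPulling
running  = datetime.strptime("2025-01-15 14:13:53.266352 +0000", fmt)  # watchObservedRunningTime

e2e = (running - created).total_seconds()        # ~16.266 s end-to-end
pulling = (pull_end - pull_beg).total_seconds()  # ~14.362 s spent pulling images
print(f"E2E={e2e:.3f}s pulling={pulling:.3f}s SLO={e2e - pulling:.3f}s")  # SLO ~1.904 s
```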
Jan 15 14:14:06.871096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086-rootfs.mount: Deactivated successfully. Jan 15 14:14:06.960750 containerd[1513]: time="2025-01-15T14:14:06.959839649Z" level=info msg="shim disconnected" id=e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086 namespace=k8s.io Jan 15 14:14:06.960750 containerd[1513]: time="2025-01-15T14:14:06.960003753Z" level=warning msg="cleaning up after shim disconnected" id=e92bdf9f9bfa06c473288c367e054841742e97d889b7890d19151264ebf2a086 namespace=k8s.io Jan 15 14:14:06.960750 containerd[1513]: time="2025-01-15T14:14:06.960027367Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 15 14:14:06.981297 kubelet[2647]: I0115 14:14:06.981244 2647 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 15 14:14:07.045932 systemd[1]: Created slice kubepods-burstable-pod62a77434_74c8_4f4e_93f0_06f47a0ce14a.slice - libcontainer container kubepods-burstable-pod62a77434_74c8_4f4e_93f0_06f47a0ce14a.slice. Jan 15 14:14:07.068772 systemd[1]: Created slice kubepods-burstable-podd0a1a3c0_5c7a_49bf_98cd_cbcfee7c8716.slice - libcontainer container kubepods-burstable-podd0a1a3c0_5c7a_49bf_98cd_cbcfee7c8716.slice. Jan 15 14:14:07.092448 systemd[1]: Created slice kubepods-besteffort-poddb57f239_8d03_4c8d_9b83_ba0c7c7afb10.slice - libcontainer container kubepods-besteffort-poddb57f239_8d03_4c8d_9b83_ba0c7c7afb10.slice. Jan 15 14:14:07.114944 systemd[1]: Created slice kubepods-besteffort-podab3b97f5_1e67_4e9f_9a0c_89fe7060ec56.slice - libcontainer container kubepods-besteffort-podab3b97f5_1e67_4e9f_9a0c_89fe7060ec56.slice. Jan 15 14:14:07.129043 systemd[1]: Created slice kubepods-besteffort-podffcfce1b_cf9f_456b_9f06_d5b89b3336c6.slice - libcontainer container kubepods-besteffort-podffcfce1b_cf9f_456b_9f06_d5b89b3336c6.slice. 
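The Created slice entries show kubelet (running with the systemd cgroup driver) materialising one transient slice per admitted pod, named by QoS class and pod UID with the UID's dashes escaped to underscores, since the dash is the hierarchy separator in slice unit names. A sketch of that mangling, reconstructed from the units logged above:

```python
def pod_slice_name(qos: str, pod_uid: str) -> str:
    """systemd slice unit kubelet creates for a pod (systemd cgroup driver)."""
    # Dashes act as hierarchy separators in slice names, so the pod UID's
    # dashes are escaped to underscores.
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

# Matches the unit in the log above:
assert pod_slice_name("burstable", "62a77434-74c8-4f4e-93f0-06f47a0ce14a") == \
    "kubepods-burstable-pod62a77434_74c8_4f4e_93f0_06f47a0ce14a.slice"
```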
Jan 15 14:14:07.204326 kubelet[2647]: I0115 14:14:07.204261 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716-config-volume\") pod \"coredns-6f6b679f8f-j2sst\" (UID: \"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716\") " pod="kube-system/coredns-6f6b679f8f-j2sst" Jan 15 14:14:07.204689 kubelet[2647]: I0115 14:14:07.204661 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db57f239-8d03-4c8d-9b83-ba0c7c7afb10-calico-apiserver-certs\") pod \"calico-apiserver-d56596c95-5w28p\" (UID: \"db57f239-8d03-4c8d-9b83-ba0c7c7afb10\") " pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" Jan 15 14:14:07.205302 kubelet[2647]: I0115 14:14:07.204868 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2w8v\" (UniqueName: \"kubernetes.io/projected/ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56-kube-api-access-q2w8v\") pod \"calico-kube-controllers-677f55659-m4s76\" (UID: \"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56\") " pod="calico-system/calico-kube-controllers-677f55659-m4s76" Jan 15 14:14:07.205302 kubelet[2647]: I0115 14:14:07.204923 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffcfce1b-cf9f-456b-9f06-d5b89b3336c6-calico-apiserver-certs\") pod \"calico-apiserver-d56596c95-lvrnc\" (UID: \"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6\") " pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" Jan 15 14:14:07.205302 kubelet[2647]: I0115 14:14:07.204960 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62a77434-74c8-4f4e-93f0-06f47a0ce14a-config-volume\") pod \"coredns-6f6b679f8f-2gnwv\" (UID: \"62a77434-74c8-4f4e-93f0-06f47a0ce14a\") " pod="kube-system/coredns-6f6b679f8f-2gnwv" Jan 15 14:14:07.205302 kubelet[2647]: I0115 14:14:07.205018 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbk4t\" (UniqueName: \"kubernetes.io/projected/62a77434-74c8-4f4e-93f0-06f47a0ce14a-kube-api-access-rbk4t\") pod \"coredns-6f6b679f8f-2gnwv\" (UID: \"62a77434-74c8-4f4e-93f0-06f47a0ce14a\") " pod="kube-system/coredns-6f6b679f8f-2gnwv" Jan 15 14:14:07.205302 kubelet[2647]: I0115 14:14:07.205052 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gq4\" (UniqueName: \"kubernetes.io/projected/db57f239-8d03-4c8d-9b83-ba0c7c7afb10-kube-api-access-t2gq4\") pod \"calico-apiserver-d56596c95-5w28p\" (UID: \"db57f239-8d03-4c8d-9b83-ba0c7c7afb10\") " pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" Jan 15 14:14:07.205820 kubelet[2647]: I0115 14:14:07.205081 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc69g\" (UniqueName: \"kubernetes.io/projected/ffcfce1b-cf9f-456b-9f06-d5b89b3336c6-kube-api-access-bc69g\") pod \"calico-apiserver-d56596c95-lvrnc\" (UID: \"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6\") " pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" Jan 15 14:14:07.205820 kubelet[2647]: I0115 14:14:07.205137 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56-tigera-ca-bundle\") pod \"calico-kube-controllers-677f55659-m4s76\" (UID: \"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56\") " pod="calico-system/calico-kube-controllers-677f55659-m4s76" Jan 15 14:14:07.205820 kubelet[2647]: I0115 14:14:07.205259 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ssj\" (UniqueName: \"kubernetes.io/projected/d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716-kube-api-access-v5ssj\") pod \"coredns-6f6b679f8f-j2sst\" (UID: \"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716\") " pod="kube-system/coredns-6f6b679f8f-j2sst" Jan 15 14:14:07.303106 containerd[1513]: time="2025-01-15T14:14:07.303031339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 15 14:14:07.406097 containerd[1513]: time="2025-01-15T14:14:07.402356685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j2sst,Uid:d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716,Namespace:kube-system,Attempt:0,}" Jan 15 14:14:07.421457 containerd[1513]: time="2025-01-15T14:14:07.421381984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f55659-m4s76,Uid:ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56,Namespace:calico-system,Attempt:0,}" Jan 15 14:14:07.436355 containerd[1513]: time="2025-01-15T14:14:07.436289646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-lvrnc,Uid:ffcfce1b-cf9f-456b-9f06-d5b89b3336c6,Namespace:calico-apiserver,Attempt:0,}" Jan 15 14:14:07.669123 containerd[1513]: time="2025-01-15T14:14:07.668810990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2gnwv,Uid:62a77434-74c8-4f4e-93f0-06f47a0ce14a,Namespace:kube-system,Attempt:0,}" Jan 15 14:14:07.704608 containerd[1513]: time="2025-01-15T14:14:07.703367125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-5w28p,Uid:db57f239-8d03-4c8d-9b83-ba0c7c7afb10,Namespace:calico-apiserver,Attempt:0,}" Jan 15 14:14:07.941795 containerd[1513]: time="2025-01-15T14:14:07.940845543Z" level=error msg="Failed to destroy network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.945650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e-shm.mount: Deactivated successfully. 
Jan 15 14:14:07.950498 containerd[1513]: time="2025-01-15T14:14:07.950358206Z" level=error msg="Failed to destroy network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.953646 containerd[1513]: time="2025-01-15T14:14:07.953391891Z" level=error msg="encountered an error cleaning up failed sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.953646 containerd[1513]: time="2025-01-15T14:14:07.953485489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j2sst,Uid:d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.954489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593-shm.mount: Deactivated successfully. Jan 15 14:14:07.958288 containerd[1513]: time="2025-01-15T14:14:07.955818161Z" level=error msg="encountered an error cleaning up failed sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.958288 containerd[1513]: time="2025-01-15T14:14:07.955904305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-lvrnc,Uid:ffcfce1b-cf9f-456b-9f06-d5b89b3336c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.968136 kubelet[2647]: E0115 14:14:07.966604 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.968136 kubelet[2647]: E0115 14:14:07.966824 2647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" Jan 15 14:14:07.968136 kubelet[2647]: E0115 
14:14:07.966885 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.968136 kubelet[2647]: E0115 14:14:07.967043 2647 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" Jan 15 14:14:07.968527 containerd[1513]: time="2025-01-15T14:14:07.967889693Z" level=error msg="Failed to destroy network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.975145 kubelet[2647]: E0115 14:14:07.967084 2647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-j2sst" Jan 15 14:14:07.975145 kubelet[2647]: E0115 14:14:07.967113 2647 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-j2sst" Jan 15 14:14:07.975145 kubelet[2647]: E0115 14:14:07.967141 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d56596c95-lvrnc_calico-apiserver(ffcfce1b-cf9f-456b-9f06-d5b89b3336c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d56596c95-lvrnc_calico-apiserver(ffcfce1b-cf9f-456b-9f06-d5b89b3336c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" podUID="ffcfce1b-cf9f-456b-9f06-d5b89b3336c6" Jan 15 14:14:07.972074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8-shm.mount: Deactivated successfully. 
Jan 15 14:14:07.975785 containerd[1513]: time="2025-01-15T14:14:07.970776580Z" level=error msg="encountered an error cleaning up failed sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.975785 containerd[1513]: time="2025-01-15T14:14:07.970863818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f55659-m4s76,Uid:ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.975960 kubelet[2647]: E0115 14:14:07.967176 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-j2sst_kube-system(d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-j2sst_kube-system(d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-j2sst" podUID="d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716" Jan 15 14:14:07.975960 kubelet[2647]: E0115 14:14:07.972556 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:07.975960 kubelet[2647]: E0115 14:14:07.972628 2647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677f55659-m4s76" Jan 15 14:14:07.976154 kubelet[2647]: E0115 14:14:07.972657 2647 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677f55659-m4s76" Jan 15 14:14:07.976154 kubelet[2647]: E0115 14:14:07.972704 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677f55659-m4s76_calico-system(ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-677f55659-m4s76_calico-system(ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677f55659-m4s76" podUID="ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56" Jan 15 14:14:08.036770 containerd[1513]: time="2025-01-15T14:14:08.035503475Z" level=error msg="Failed to destroy network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.040943 containerd[1513]: time="2025-01-15T14:14:08.038392257Z" level=error msg="encountered an error cleaning up failed sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.040943 containerd[1513]: time="2025-01-15T14:14:08.038492933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-5w28p,Uid:db57f239-8d03-4c8d-9b83-ba0c7c7afb10,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.042945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926-shm.mount: Deactivated successfully. 
Jan 15 14:14:08.044419 kubelet[2647]: E0115 14:14:08.043761 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.044419 kubelet[2647]: E0115 14:14:08.043843 2647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" Jan 15 14:14:08.044419 kubelet[2647]: E0115 14:14:08.043878 2647 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" Jan 15 14:14:08.045051 kubelet[2647]: E0115 14:14:08.043953 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d56596c95-5w28p_calico-apiserver(db57f239-8d03-4c8d-9b83-ba0c7c7afb10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d56596c95-5w28p_calico-apiserver(db57f239-8d03-4c8d-9b83-ba0c7c7afb10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" podUID="db57f239-8d03-4c8d-9b83-ba0c7c7afb10" Jan 15 14:14:08.065920 systemd[1]: Created slice kubepods-besteffort-pod7cb0d9ef_84b8_4638_9648_eb1fe2376a04.slice - libcontainer container kubepods-besteffort-pod7cb0d9ef_84b8_4638_9648_eb1fe2376a04.slice. 
Jan 15 14:14:08.074117 containerd[1513]: time="2025-01-15T14:14:08.074007764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvqsn,Uid:7cb0d9ef-84b8-4638-9648-eb1fe2376a04,Namespace:calico-system,Attempt:0,}" Jan 15 14:14:08.087031 containerd[1513]: time="2025-01-15T14:14:08.086943584Z" level=error msg="Failed to destroy network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.087704 containerd[1513]: time="2025-01-15T14:14:08.087628967Z" level=error msg="encountered an error cleaning up failed sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.087849 containerd[1513]: time="2025-01-15T14:14:08.087783601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2gnwv,Uid:62a77434-74c8-4f4e-93f0-06f47a0ce14a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.088233 kubelet[2647]: E0115 14:14:08.088179 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.088343 kubelet[2647]: E0115 14:14:08.088277 2647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2gnwv" Jan 15 14:14:08.088343 kubelet[2647]: E0115 14:14:08.088310 2647 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2gnwv" Jan 15 14:14:08.088504 kubelet[2647]: E0115 14:14:08.088397 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2gnwv_kube-system(62a77434-74c8-4f4e-93f0-06f47a0ce14a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2gnwv_kube-system(62a77434-74c8-4f4e-93f0-06f47a0ce14a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2gnwv" podUID="62a77434-74c8-4f4e-93f0-06f47a0ce14a" Jan 15 14:14:08.186727 containerd[1513]: time="2025-01-15T14:14:08.186626291Z" level=error msg="Failed to destroy network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.187236 containerd[1513]: time="2025-01-15T14:14:08.187189626Z" level=error msg="encountered an error cleaning up failed sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.187343 containerd[1513]: time="2025-01-15T14:14:08.187268637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvqsn,Uid:7cb0d9ef-84b8-4638-9648-eb1fe2376a04,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.187761 kubelet[2647]: E0115 14:14:08.187584 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.187761 kubelet[2647]: E0115 14:14:08.187672 2647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:14:08.187761 kubelet[2647]: E0115 14:14:08.187703 2647 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvqsn" Jan 15 14:14:08.187954 kubelet[2647]: E0115 14:14:08.187800 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hvqsn_calico-system(7cb0d9ef-84b8-4638-9648-eb1fe2376a04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hvqsn_calico-system(7cb0d9ef-84b8-4638-9648-eb1fe2376a04)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:08.308875 kubelet[2647]: I0115 14:14:08.306114 2647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:14:08.314248 kubelet[2647]: I0115 14:14:08.313952 2647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:14:08.364007 kubelet[2647]: I0115 14:14:08.363915 2647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:14:08.369016 kubelet[2647]: I0115 14:14:08.368311 2647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:08.373688 kubelet[2647]: I0115 14:14:08.372735 2647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:14:08.376055 kubelet[2647]: I0115 14:14:08.375905 2647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:14:08.410426 containerd[1513]: time="2025-01-15T14:14:08.407943656Z" level=info msg="StopPodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\"" Jan 15 14:14:08.410426 containerd[1513]: time="2025-01-15T14:14:08.408920976Z" level=info msg="StopPodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\"" Jan 15 14:14:08.410426 containerd[1513]: time="2025-01-15T14:14:08.409995214Z" level=info msg="Ensure that sandbox c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8 in task-service has been cleanup successfully" Jan 15 14:14:08.416002 containerd[1513]: time="2025-01-15T14:14:08.415804935Z" level=info msg="StopPodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\"" Jan 15 14:14:08.417094 containerd[1513]: time="2025-01-15T14:14:08.416925260Z" level=info msg="Ensure that sandbox 1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b in task-service has been cleanup successfully" Jan 15 14:14:08.417709 containerd[1513]: time="2025-01-15T14:14:08.417684075Z" level=info msg="Ensure that sandbox 7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb in task-service has been cleanup successfully" Jan 15 14:14:08.429610 containerd[1513]: time="2025-01-15T14:14:08.416797108Z" level=info msg="StopPodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\"" Jan 15 14:14:08.429610 containerd[1513]: time="2025-01-15T14:14:08.427685673Z" level=info msg="Ensure that sandbox 6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e in task-service has been cleanup successfully" Jan 15 14:14:08.430947 containerd[1513]: time="2025-01-15T14:14:08.416828499Z" level=info msg="StopPodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\"" Jan 15 14:14:08.431861 containerd[1513]: 
time="2025-01-15T14:14:08.431825816Z" level=info msg="Ensure that sandbox debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926 in task-service has been cleanup successfully" Jan 15 14:14:08.437377 containerd[1513]: time="2025-01-15T14:14:08.416867157Z" level=info msg="StopPodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\"" Jan 15 14:14:08.443206 containerd[1513]: time="2025-01-15T14:14:08.442947693Z" level=info msg="Ensure that sandbox 0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593 in task-service has been cleanup successfully" Jan 15 14:14:08.665898 containerd[1513]: time="2025-01-15T14:14:08.665046938Z" level=error msg="StopPodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" failed" error="failed to destroy network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.667070 kubelet[2647]: E0115 14:14:08.666188 2647 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:14:08.667334 kubelet[2647]: E0115 14:14:08.667134 2647 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b"} Jan 15 14:14:08.667838 kubelet[2647]: E0115 14:14:08.667303 2647 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"62a77434-74c8-4f4e-93f0-06f47a0ce14a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 14:14:08.667838 kubelet[2647]: E0115 14:14:08.667375 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"62a77434-74c8-4f4e-93f0-06f47a0ce14a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2gnwv" podUID="62a77434-74c8-4f4e-93f0-06f47a0ce14a" Jan 15 14:14:08.690229 containerd[1513]: time="2025-01-15T14:14:08.689831037Z" level=error msg="StopPodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" failed" error="failed to destroy network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.690656 
kubelet[2647]: E0115 14:14:08.690237 2647 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:14:08.690656 kubelet[2647]: E0115 14:14:08.690401 2647 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb"} Jan 15 14:14:08.690656 kubelet[2647]: E0115 14:14:08.690482 2647 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 14:14:08.690656 kubelet[2647]: E0115 14:14:08.690529 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cb0d9ef-84b8-4638-9648-eb1fe2376a04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvqsn" podUID="7cb0d9ef-84b8-4638-9648-eb1fe2376a04" Jan 15 14:14:08.692728 containerd[1513]: time="2025-01-15T14:14:08.692651090Z" level=error msg="StopPodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" failed" error="failed to destroy network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.693231 kubelet[2647]: E0115 14:14:08.693092 2647 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:14:08.693231 kubelet[2647]: E0115 14:14:08.693172 2647 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926"} Jan 15 14:14:08.693231 kubelet[2647]: E0115 14:14:08.693216 2647 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db57f239-8d03-4c8d-9b83-ba0c7c7afb10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 14:14:08.693439 kubelet[2647]: E0115 14:14:08.693248 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db57f239-8d03-4c8d-9b83-ba0c7c7afb10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" podUID="db57f239-8d03-4c8d-9b83-ba0c7c7afb10" Jan 15 14:14:08.706481 containerd[1513]: time="2025-01-15T14:14:08.705606239Z" level=error msg="StopPodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" failed" error="failed to destroy network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.706713 kubelet[2647]: E0115 14:14:08.706208 2647 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:14:08.707366 kubelet[2647]: E0115 14:14:08.707173 2647 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8"} Jan 15 14:14:08.707366 kubelet[2647]: E0115 14:14:08.707252 2647 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 14:14:08.708012 kubelet[2647]: E0115 14:14:08.707428 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677f55659-m4s76" podUID="ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56" Jan 15 14:14:08.715808 containerd[1513]: time="2025-01-15T14:14:08.715724917Z" level=error msg="StopPodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" failed" error="failed to destroy network for sandbox 
\"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.716875 kubelet[2647]: E0115 14:14:08.716083 2647 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:08.717216 kubelet[2647]: E0115 14:14:08.717014 2647 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e"} Jan 15 14:14:08.717581 kubelet[2647]: E0115 14:14:08.717136 2647 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 14:14:08.717581 kubelet[2647]: E0115 14:14:08.717452 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" podUID="ffcfce1b-cf9f-456b-9f06-d5b89b3336c6" Jan 15 14:14:08.724886 containerd[1513]: time="2025-01-15T14:14:08.724808611Z" level=error msg="StopPodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" failed" error="failed to destroy network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 14:14:08.725525 kubelet[2647]: E0115 14:14:08.725449 2647 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:14:08.726010 kubelet[2647]: E0115 14:14:08.725548 2647 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593"} Jan 15 14:14:08.726010 kubelet[2647]: E0115 14:14:08.725604 2647 
kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 14:14:08.726010 kubelet[2647]: E0115 14:14:08.725642 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-j2sst" podUID="d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716" Jan 15 14:14:08.867491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b-shm.mount: Deactivated successfully. Jan 15 14:14:19.797217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046792531.mount: Deactivated successfully. Jan 15 14:14:19.979705 containerd[1513]: time="2025-01-15T14:14:19.979558443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:19.990270 containerd[1513]: time="2025-01-15T14:14:19.989947595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 15 14:14:19.992150 containerd[1513]: time="2025-01-15T14:14:19.992070730Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:19.995311 containerd[1513]: time="2025-01-15T14:14:19.994350956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:19.997748 containerd[1513]: time="2025-01-15T14:14:19.997697171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 12.689713735s" Jan 15 14:14:19.998010 containerd[1513]: time="2025-01-15T14:14:19.997967494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 15 14:14:20.044658 containerd[1513]: time="2025-01-15T14:14:20.044552527Z" level=info msg="CreateContainer within sandbox \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 14:14:20.110560 containerd[1513]: time="2025-01-15T14:14:20.110208758Z" level=info msg="CreateContainer within sandbox \"0132ae029dc796e260e90fb9d4a03ad4be32153da9fe8162ce69a96c73b8c34b\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04\"" Jan 15 14:14:20.114003 containerd[1513]: time="2025-01-15T14:14:20.113226678Z" level=info msg="StartContainer for \"898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04\"" Jan 15 14:14:20.221393 systemd[1]: Started cri-containerd-898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04.scope - libcontainer container 898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04. Jan 15 14:14:20.297803 containerd[1513]: time="2025-01-15T14:14:20.295290774Z" level=info msg="StartContainer for \"898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04\" returns successfully" Jan 15 14:14:20.442710 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 14:14:20.443928 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 14:14:20.564973 kubelet[2647]: I0115 14:14:20.558519 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-748l2" podStartSLOduration=1.539206061 podStartE2EDuration="43.531701742s" podCreationTimestamp="2025-01-15 14:13:37 +0000 UTC" firstStartedPulling="2025-01-15 14:13:38.007254599 +0000 UTC m=+15.180578155" lastFinishedPulling="2025-01-15 14:14:19.999750267 +0000 UTC m=+57.173073836" observedRunningTime="2025-01-15 14:14:20.531016977 +0000 UTC m=+57.704340549" watchObservedRunningTime="2025-01-15 14:14:20.531701742 +0000 UTC m=+57.705025302" Jan 15 14:14:21.043961 containerd[1513]: time="2025-01-15T14:14:21.043606052Z" level=info msg="StopPodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\"" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.185 [INFO][3754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.187 [INFO][3754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" iface="eth0" netns="/var/run/netns/cni-555cade5-4245-bda5-c760-85bc083b47c7" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.187 [INFO][3754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" iface="eth0" netns="/var/run/netns/cni-555cade5-4245-bda5-c760-85bc083b47c7" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.189 [INFO][3754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" iface="eth0" netns="/var/run/netns/cni-555cade5-4245-bda5-c760-85bc083b47c7" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.189 [INFO][3754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.189 [INFO][3754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.403 [INFO][3771] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.405 [INFO][3771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.406 [INFO][3771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.423 [WARNING][3771] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.423 [INFO][3771] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.426 [INFO][3771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:21.432419 containerd[1513]: 2025-01-15 14:14:21.428 [INFO][3754] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:21.435766 containerd[1513]: time="2025-01-15T14:14:21.435288684Z" level=info msg="TearDown network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" successfully" Jan 15 14:14:21.435766 containerd[1513]: time="2025-01-15T14:14:21.435363563Z" level=info msg="StopPodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" returns successfully" Jan 15 14:14:21.435811 systemd[1]: run-netns-cni\x2d555cade5\x2d4245\x2dbda5\x2dc760\x2d85bc083b47c7.mount: Deactivated successfully. 
Jan 15 14:14:21.440922 containerd[1513]: time="2025-01-15T14:14:21.439966537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-lvrnc,Uid:ffcfce1b-cf9f-456b-9f06-d5b89b3336c6,Namespace:calico-apiserver,Attempt:1,}" Jan 15 14:14:21.729869 systemd-networkd[1415]: cali7b3fb2c54b4: Link UP Jan 15 14:14:21.732174 systemd-networkd[1415]: cali7b3fb2c54b4: Gained carrier Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.502 [INFO][3779] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.525 [INFO][3779] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0 calico-apiserver-d56596c95- calico-apiserver ffcfce1b-cf9f-456b-9f06-d5b89b3336c6 781 0 2025-01-15 14:13:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d56596c95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-8ino3.gb1.brightbox.com calico-apiserver-d56596c95-lvrnc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b3fb2c54b4 [] []}} ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.526 [INFO][3779] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.595 [INFO][3789] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" HandleID="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.624 [INFO][3789] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" HandleID="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a9e10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-8ino3.gb1.brightbox.com", "pod":"calico-apiserver-d56596c95-lvrnc", "timestamp":"2025-01-15 14:14:21.595082817 +0000 UTC"}, Hostname:"srv-8ino3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.628 [INFO][3789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.628 [INFO][3789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.628 [INFO][3789] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8ino3.gb1.brightbox.com' Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.635 [INFO][3789] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.647 [INFO][3789] ipam/ipam.go 372: Looking up existing affinities for host host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.666 [INFO][3789] ipam/ipam.go 489: Trying affinity for 192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.671 [INFO][3789] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.676 [INFO][3789] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.677 [INFO][3789] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.683 [INFO][3789] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240 Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.690 [INFO][3789] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.700 [INFO][3789] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.193/26] block=192.168.33.192/26 handle="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.700 [INFO][3789] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.193/26] handle="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.700 [INFO][3789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 14:14:21.776045 containerd[1513]: 2025-01-15 14:14:21.700 [INFO][3789] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.193/26] IPv6=[] ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" HandleID="k8s-pod-network.e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.778880 containerd[1513]: 2025-01-15 14:14:21.706 [INFO][3779] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-d56596c95-lvrnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b3fb2c54b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:21.778880 containerd[1513]: 2025-01-15 14:14:21.706 [INFO][3779] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.193/32] ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.778880 containerd[1513]: 2025-01-15 14:14:21.707 [INFO][3779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b3fb2c54b4 ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.778880 containerd[1513]: 2025-01-15 14:14:21.733 [INFO][3779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.778880 containerd[1513]: 2025-01-15 14:14:21.735 [INFO][3779] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240", Pod:"calico-apiserver-d56596c95-lvrnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b3fb2c54b4", MAC:"42:d5:3a:04:d1:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:21.778880 containerd[1513]: 2025-01-15 14:14:21.771 [INFO][3779] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-lvrnc" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:21.795058 systemd[1]: run-containerd-runc-k8s.io-898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04-runc.GV2Kmq.mount: Deactivated successfully. Jan 15 14:14:21.867115 containerd[1513]: time="2025-01-15T14:14:21.866901678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:14:21.867541 containerd[1513]: time="2025-01-15T14:14:21.867076449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:14:21.867541 containerd[1513]: time="2025-01-15T14:14:21.867364942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:21.869774 containerd[1513]: time="2025-01-15T14:14:21.869672186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:21.906293 systemd[1]: Started cri-containerd-e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240.scope - libcontainer container e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240. 
Jan 15 14:14:22.054970 containerd[1513]: time="2025-01-15T14:14:22.054037170Z" level=info msg="StopPodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\"" Jan 15 14:14:22.101163 containerd[1513]: time="2025-01-15T14:14:22.101109123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-lvrnc,Uid:ffcfce1b-cf9f-456b-9f06-d5b89b3336c6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240\"" Jan 15 14:14:22.110834 containerd[1513]: time="2025-01-15T14:14:22.110191211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.197 [INFO][3882] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.197 [INFO][3882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" iface="eth0" netns="/var/run/netns/cni-7736b85d-c4f6-7c3f-9462-5a42c419b24a" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.197 [INFO][3882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" iface="eth0" netns="/var/run/netns/cni-7736b85d-c4f6-7c3f-9462-5a42c419b24a" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.198 [INFO][3882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" iface="eth0" netns="/var/run/netns/cni-7736b85d-c4f6-7c3f-9462-5a42c419b24a" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.198 [INFO][3882] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.198 [INFO][3882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.295 [INFO][3919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.296 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.296 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.314 [WARNING][3919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.314 [INFO][3919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.319 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:22.324925 containerd[1513]: 2025-01-15 14:14:22.321 [INFO][3882] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:14:22.324925 containerd[1513]: time="2025-01-15T14:14:22.324615805Z" level=info msg="TearDown network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" successfully" Jan 15 14:14:22.324925 containerd[1513]: time="2025-01-15T14:14:22.324669394Z" level=info msg="StopPodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" returns successfully" Jan 15 14:14:22.330620 containerd[1513]: time="2025-01-15T14:14:22.329839756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvqsn,Uid:7cb0d9ef-84b8-4638-9648-eb1fe2376a04,Namespace:calico-system,Attempt:1,}" Jan 15 14:14:22.334191 systemd[1]: run-netns-cni\x2d7736b85d\x2dc4f6\x2d7c3f\x2d9462\x2d5a42c419b24a.mount: Deactivated successfully. Jan 15 14:14:22.877753 systemd-networkd[1415]: calid2cf6173d3e: Link UP Jan 15 14:14:22.879317 systemd-networkd[1415]: calid2cf6173d3e: Gained carrier Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.527 [INFO][3981] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.594 [INFO][3981] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0 csi-node-driver- calico-system 7cb0d9ef-84b8-4638-9648-eb1fe2376a04 790 0 2025-01-15 14:13:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-8ino3.gb1.brightbox.com csi-node-driver-hvqsn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid2cf6173d3e [] []}} ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.595 [INFO][3981] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.741 [INFO][3999] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" HandleID="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.768 [INFO][3999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" HandleID="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d4ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-8ino3.gb1.brightbox.com", "pod":"csi-node-driver-hvqsn", "timestamp":"2025-01-15 14:14:22.741630058 +0000 UTC"}, Hostname:"srv-8ino3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.768 [INFO][3999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.769 [INFO][3999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.769 [INFO][3999] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8ino3.gb1.brightbox.com' Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.773 [INFO][3999] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.781 [INFO][3999] ipam/ipam.go 372: Looking up existing affinities for host host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.791 [INFO][3999] ipam/ipam.go 489: Trying affinity for 192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.799 [INFO][3999] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.837 [INFO][3999] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.838 [INFO][3999] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.844 [INFO][3999] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54 Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.853 [INFO][3999] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.861 [INFO][3999] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.194/26] block=192.168.33.192/26 
handle="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.862 [INFO][3999] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.194/26] handle="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.862 [INFO][3999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:22.925678 containerd[1513]: 2025-01-15 14:14:22.863 [INFO][3999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.194/26] IPv6=[] ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" HandleID="k8s-pod-network.40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.928752 containerd[1513]: 2025-01-15 14:14:22.867 [INFO][3981] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cb0d9ef-84b8-4638-9648-eb1fe2376a04", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-hvqsn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2cf6173d3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:22.928752 containerd[1513]: 2025-01-15 14:14:22.868 [INFO][3981] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.194/32] ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.928752 containerd[1513]: 2025-01-15 14:14:22.868 [INFO][3981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2cf6173d3e ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.928752 containerd[1513]: 
2025-01-15 14:14:22.879 [INFO][3981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:22.928752 containerd[1513]: 2025-01-15 14:14:22.885 [INFO][3981] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cb0d9ef-84b8-4638-9648-eb1fe2376a04", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54", Pod:"csi-node-driver-hvqsn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2cf6173d3e", MAC:"16:41:76:69:74:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:22.928752 containerd[1513]: 2025-01-15 14:14:22.922 [INFO][3981] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54" Namespace="calico-system" Pod="csi-node-driver-hvqsn" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:14:23.048644 systemd-networkd[1415]: cali7b3fb2c54b4: Gained IPv6LL Jan 15 14:14:23.062293 containerd[1513]: time="2025-01-15T14:14:23.059239899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:14:23.068769 containerd[1513]: time="2025-01-15T14:14:23.067643136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:14:23.068769 containerd[1513]: time="2025-01-15T14:14:23.067680085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:23.068769 containerd[1513]: time="2025-01-15T14:14:23.067853450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:23.197395 containerd[1513]: time="2025-01-15T14:14:23.197072101Z" level=info msg="StopPodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\"" Jan 15 14:14:23.216234 systemd[1]: Started cri-containerd-40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54.scope - libcontainer container 40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54. Jan 15 14:14:23.222349 containerd[1513]: time="2025-01-15T14:14:23.222109478Z" level=info msg="StopPodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\"" Jan 15 14:14:23.250995 containerd[1513]: time="2025-01-15T14:14:23.250385294Z" level=info msg="StopPodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\"" Jan 15 14:14:23.378395 containerd[1513]: time="2025-01-15T14:14:23.378334501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvqsn,Uid:7cb0d9ef-84b8-4638-9648-eb1fe2376a04,Namespace:calico-system,Attempt:1,} returns sandbox id \"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54\"" Jan 15 14:14:23.692015 kernel: bpftool[4179]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.503 [WARNING][4111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240", Pod:"calico-apiserver-d56596c95-lvrnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b3fb2c54b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.506 [INFO][4111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.506 [INFO][4111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" iface="eth0" netns="" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.506 [INFO][4111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.506 [INFO][4111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.772 [INFO][4159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.772 [INFO][4159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.774 [INFO][4159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.787 [WARNING][4159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.787 [INFO][4159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.791 [INFO][4159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:23.799349 containerd[1513]: 2025-01-15 14:14:23.793 [INFO][4111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:23.800406 containerd[1513]: time="2025-01-15T14:14:23.800362038Z" level=info msg="TearDown network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" successfully" Jan 15 14:14:23.800528 containerd[1513]: time="2025-01-15T14:14:23.800501407Z" level=info msg="StopPodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" returns successfully" Jan 15 14:14:23.802097 containerd[1513]: time="2025-01-15T14:14:23.802065142Z" level=info msg="RemovePodSandbox for \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\"" Jan 15 14:14:23.802251 containerd[1513]: time="2025-01-15T14:14:23.802223294Z" level=info msg="Forcibly stopping sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\"" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.531 [INFO][4120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.533 [INFO][4120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" iface="eth0" netns="/var/run/netns/cni-b7ad0d99-af7f-0404-eb78-cfd54275ea8d" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.533 [INFO][4120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" iface="eth0" netns="/var/run/netns/cni-b7ad0d99-af7f-0404-eb78-cfd54275ea8d" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.537 [INFO][4120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" iface="eth0" netns="/var/run/netns/cni-b7ad0d99-af7f-0404-eb78-cfd54275ea8d" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.537 [INFO][4120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.537 [INFO][4120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.785 [INFO][4163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.785 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.792 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.810 [WARNING][4163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.810 [INFO][4163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.825 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:23.837238 containerd[1513]: 2025-01-15 14:14:23.830 [INFO][4120] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:14:23.843974 systemd[1]: run-netns-cni\x2db7ad0d99\x2daf7f\x2d0404\x2deb78\x2dcfd54275ea8d.mount: Deactivated successfully. 
Jan 15 14:14:23.845515 containerd[1513]: time="2025-01-15T14:14:23.844154002Z" level=info msg="TearDown network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" successfully" Jan 15 14:14:23.845515 containerd[1513]: time="2025-01-15T14:14:23.844227986Z" level=info msg="StopPodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" returns successfully" Jan 15 14:14:23.846769 containerd[1513]: time="2025-01-15T14:14:23.846044817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f55659-m4s76,Uid:ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56,Namespace:calico-system,Attempt:1,}" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.535 [INFO][4124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.537 [INFO][4124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" iface="eth0" netns="/var/run/netns/cni-a41e94a3-e1db-0cef-b52e-91f8d98fd63b" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.542 [INFO][4124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" iface="eth0" netns="/var/run/netns/cni-a41e94a3-e1db-0cef-b52e-91f8d98fd63b" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.546 [INFO][4124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" iface="eth0" netns="/var/run/netns/cni-a41e94a3-e1db-0cef-b52e-91f8d98fd63b" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.546 [INFO][4124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.546 [INFO][4124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.813 [INFO][4167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.815 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.825 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.847 [WARNING][4167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.847 [INFO][4167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.855 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:23.865113 containerd[1513]: 2025-01-15 14:14:23.860 [INFO][4124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:14:23.868216 containerd[1513]: time="2025-01-15T14:14:23.866911727Z" level=info msg="TearDown network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" successfully" Jan 15 14:14:23.868216 containerd[1513]: time="2025-01-15T14:14:23.866999613Z" level=info msg="StopPodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" returns successfully" Jan 15 14:14:23.869044 containerd[1513]: time="2025-01-15T14:14:23.868679505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-5w28p,Uid:db57f239-8d03-4c8d-9b83-ba0c7c7afb10,Namespace:calico-apiserver,Attempt:1,}" Jan 15 14:14:23.873841 systemd[1]: run-netns-cni\x2da41e94a3\x2de1db\x2d0cef\x2db52e\x2d91f8d98fd63b.mount: Deactivated successfully. Jan 15 14:14:24.041908 containerd[1513]: time="2025-01-15T14:14:24.041691065Z" level=info msg="StopPodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\"" Jan 15 14:14:24.047636 containerd[1513]: time="2025-01-15T14:14:24.046859235Z" level=info msg="StopPodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\"" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:23.964 [WARNING][4198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffcfce1b-cf9f-456b-9f06-d5b89b3336c6", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240", Pod:"calico-apiserver-d56596c95-lvrnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b3fb2c54b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:23.965 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:23.965 [INFO][4198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" iface="eth0" netns="" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:23.966 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:23.966 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.154 [INFO][4215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.155 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.156 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.170 [WARNING][4215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.170 [INFO][4215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" HandleID="k8s-pod-network.6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--lvrnc-eth0" Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.177 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:24.181139 containerd[1513]: 2025-01-15 14:14:24.179 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e" Jan 15 14:14:24.183851 containerd[1513]: time="2025-01-15T14:14:24.181601834Z" level=info msg="TearDown network for sandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" successfully" Jan 15 14:14:24.216918 containerd[1513]: time="2025-01-15T14:14:24.216449308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 14:14:24.216918 containerd[1513]: time="2025-01-15T14:14:24.216562492Z" level=info msg="RemovePodSandbox \"6252a9c561e63c43dabe003812f18211a225b8065c93ccdaf1ebc00a557db98e\" returns successfully" Jan 15 14:14:24.525348 systemd-networkd[1415]: cali7245c61c5ff: Link UP Jan 15 14:14:24.525773 systemd-networkd[1415]: cali7245c61c5ff: Gained carrier Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.360 [INFO][4266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.361 [INFO][4266] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" iface="eth0" netns="/var/run/netns/cni-fbcbe3c4-2f41-4f50-2841-d339482de68d" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.362 [INFO][4266] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" iface="eth0" netns="/var/run/netns/cni-fbcbe3c4-2f41-4f50-2841-d339482de68d" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.365 [INFO][4266] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" iface="eth0" netns="/var/run/netns/cni-fbcbe3c4-2f41-4f50-2841-d339482de68d" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.365 [INFO][4266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.365 [INFO][4266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.434 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.434 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.494 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.518 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.519 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.532 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:24.551010 containerd[1513]: 2025-01-15 14:14:24.536 [INFO][4266] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:14:24.555227 containerd[1513]: time="2025-01-15T14:14:24.554957903Z" level=info msg="TearDown network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" successfully" Jan 15 14:14:24.555227 containerd[1513]: time="2025-01-15T14:14:24.555222052Z" level=info msg="StopPodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" returns successfully" Jan 15 14:14:24.557010 containerd[1513]: time="2025-01-15T14:14:24.556511140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2gnwv,Uid:62a77434-74c8-4f4e-93f0-06f47a0ce14a,Namespace:kube-system,Attempt:1,}" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.021 [INFO][4204] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0 calico-kube-controllers-677f55659- calico-system ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56 800 0 2025-01-15 14:13:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:677f55659 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-8ino3.gb1.brightbox.com calico-kube-controllers-677f55659-m4s76 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7245c61c5ff [] []}} ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.022 [INFO][4204] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.249 [INFO][4240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" HandleID="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.286 [INFO][4240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" HandleID="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000dd930), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-8ino3.gb1.brightbox.com", "pod":"calico-kube-controllers-677f55659-m4s76", "timestamp":"2025-01-15 14:14:24.249638588 +0000 UTC"}, Hostname:"srv-8ino3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 14:14:24.615743 
containerd[1513]: 2025-01-15 14:14:24.288 [INFO][4240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.289 [INFO][4240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.289 [INFO][4240] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8ino3.gb1.brightbox.com' Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.308 [INFO][4240] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.384 [INFO][4240] ipam/ipam.go 372: Looking up existing affinities for host host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.405 [INFO][4240] ipam/ipam.go 489: Trying affinity for 192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.421 [INFO][4240] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.442 [INFO][4240] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.443 [INFO][4240] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.452 [INFO][4240] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.462 [INFO][4240] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.493 [INFO][4240] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.195/26] block=192.168.33.192/26 handle="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.493 [INFO][4240] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.195/26] handle="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.493 [INFO][4240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
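Every allocation above follows the same shape: confirm this node's affinity for the 192.168.33.192/26 block, load the block, and hand out the lowest free ordinal, which is why the pods receive .194, .195, .196 in sequence (.193 went to calico-apiserver-d56596c95-lvrnc earlier). A toy sketch of that next-free-ordinal step, assuming a plain in-memory map in place of the datastore block document; real Calico IPAM additionally tracks handles, attributes, and retries on compare-and-swap failures.

    package main

    import (
        "fmt"
        "net"
    )

    // nextFree picks the lowest unallocated ordinal in a block, roughly the
    // step ipam.go performs once affinity for the block is confirmed. The
    // in-memory map stands in for the datastore block (an assumption).
    func nextFree(block *net.IPNet, used map[int]bool) (net.IP, int, bool) {
        ones, bits := block.Mask.Size()
        size := 1 << (bits - ones) // a /26 holds 64 addresses
        base := block.IP.To4()
        for ord := 0; ord < size; ord++ {
            if used[ord] {
                continue
            }
            ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(ord))
            return ip, ord, true
        }
        return nil, 0, false
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.33.192/26")
        // .192 presumed already taken by the node, .193/.194 claimed above.
        used := map[int]bool{0: true, 1: true, 2: true}
        ip, ord, ok := nextFree(block, used)
        fmt.Println(ip, ord, ok) // 192.168.33.195 3 true
    }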
Jan 15 14:14:24.615743 containerd[1513]: 2025-01-15 14:14:24.493 [INFO][4240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.195/26] IPv6=[] ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" HandleID="k8s-pod-network.26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.508 [INFO][4204] cni-plugin/k8s.go 386: Populated endpoint ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0", GenerateName:"calico-kube-controllers-677f55659-", Namespace:"calico-system", SelfLink:"", UID:"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f55659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-677f55659-m4s76", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7245c61c5ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.508 [INFO][4204] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.195/32] ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.508 [INFO][4204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7245c61c5ff ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.526 [INFO][4204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.619566 
containerd[1513]: 2025-01-15 14:14:24.531 [INFO][4204] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0", GenerateName:"calico-kube-controllers-677f55659-", Namespace:"calico-system", SelfLink:"", UID:"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f55659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba", Pod:"calico-kube-controllers-677f55659-m4s76", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7245c61c5ff", MAC:"82:0b:6d:fe:e6:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.605 [INFO][4204] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba" Namespace="calico-system" Pod="calico-kube-controllers-677f55659-m4s76" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.342 [INFO][4262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.343 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" iface="eth0" netns="/var/run/netns/cni-81d0e976-be5c-9bad-59b7-e96c91f06e7e" Jan 15 14:14:24.619566 containerd[1513]: 2025-01-15 14:14:24.345 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" iface="eth0" netns="/var/run/netns/cni-81d0e976-be5c-9bad-59b7-e96c91f06e7e" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.346 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" iface="eth0" netns="/var/run/netns/cni-81d0e976-be5c-9bad-59b7-e96c91f06e7e" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.346 [INFO][4262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.346 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.455 [INFO][4285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.455 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.532 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.577 [WARNING][4285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.577 [INFO][4285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.585 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:14:24.620436 containerd[1513]: 2025-01-15 14:14:24.609 [INFO][4262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:14:24.620436 containerd[1513]: time="2025-01-15T14:14:24.616245917Z" level=info msg="TearDown network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" successfully" Jan 15 14:14:24.620436 containerd[1513]: time="2025-01-15T14:14:24.616282613Z" level=info msg="StopPodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" returns successfully" Jan 15 14:14:24.632054 containerd[1513]: time="2025-01-15T14:14:24.630869103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j2sst,Uid:d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716,Namespace:kube-system,Attempt:1,}" Jan 15 14:14:24.802669 systemd-networkd[1415]: cali64efce68a26: Link UP Jan 15 14:14:24.807439 systemd-networkd[1415]: cali64efce68a26: Gained carrier Jan 15 14:14:24.839672 systemd-networkd[1415]: calid2cf6173d3e: Gained IPv6LL Jan 15 14:14:24.861482 systemd[1]: run-netns-cni\x2dfbcbe3c4\x2d2f41\x2d4f50\x2d2841\x2dd339482de68d.mount: Deactivated successfully. Jan 15 14:14:24.861729 systemd[1]: run-netns-cni\x2d81d0e976\x2dbe5c\x2d9bad\x2d59b7\x2de96c91f06e7e.mount: Deactivated successfully. 
Jan 15 14:14:24.892739 containerd[1513]: time="2025-01-15T14:14:24.817428890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:14:24.892739 containerd[1513]: time="2025-01-15T14:14:24.817518746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:14:24.892739 containerd[1513]: time="2025-01-15T14:14:24.817585377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:24.892739 containerd[1513]: time="2025-01-15T14:14:24.817763147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:24.908409 systemd-networkd[1415]: vxlan.calico: Link UP Jan 15 14:14:24.908480 systemd-networkd[1415]: vxlan.calico: Gained carrier Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.234 [INFO][4225] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0 calico-apiserver-d56596c95- calico-apiserver db57f239-8d03-4c8d-9b83-ba0c7c7afb10 799 0 2025-01-15 14:13:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d56596c95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-8ino3.gb1.brightbox.com calico-apiserver-d56596c95-5w28p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64efce68a26 [] []}} ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.236 [INFO][4225] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.400 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" HandleID="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.478 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" HandleID="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002589d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-8ino3.gb1.brightbox.com", "pod":"calico-apiserver-d56596c95-5w28p", "timestamp":"2025-01-15 14:14:24.400075311 +0000 UTC"}, Hostname:"srv-8ino3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.479 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.584 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.584 [INFO][4280] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8ino3.gb1.brightbox.com' Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.605 [INFO][4280] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.628 [INFO][4280] ipam/ipam.go 372: Looking up existing affinities for host host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.654 [INFO][4280] ipam/ipam.go 489: Trying affinity for 192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.668 [INFO][4280] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.686 [INFO][4280] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.686 [INFO][4280] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.694 [INFO][4280] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96 Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.733 [INFO][4280] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.765 [INFO][4280] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.196/26] block=192.168.33.192/26 handle="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.767 [INFO][4280] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.196/26] handle="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.767 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
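The lock messages bracketing each IPAM call also make contention measurable: request [4280] logs "About to acquire host-wide IPAM lock." at 24.479 but "Acquired" only at 24.584, roughly 105 ms spent waiting while the allocation for [4240] held the lock. A sketch that extracts those wait times per request ID; the timestamp layout is read off the bracketed records above, and the input file name is an assumption.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func main() {
        f, err := os.Open("node.log") // assumed file name for this journal dump
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Matches e.g.: 2025-01-15 14:14:24.479 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
        re := regexp.MustCompile(
            `(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: (About to acquire|Acquired) host-wide IPAM lock`)
        pending := map[string]time.Time{} // request ID -> time it began waiting

        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                t, err := time.Parse("2006-01-02 15:04:05.000", m[1])
                if err != nil {
                    continue
                }
                if m[3] == "About to acquire" {
                    pending[m[2]] = t
                } else if start, ok := pending[m[2]]; ok {
                    fmt.Printf("[%s] waited %v for the IPAM lock\n", m[2], t.Sub(start))
                    delete(pending, m[2])
                }
            }
        }
    }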
Jan 15 14:14:24.921288 containerd[1513]: 2025-01-15 14:14:24.767 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.196/26] IPv6=[] ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" HandleID="k8s-pod-network.268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:24.923743 containerd[1513]: 2025-01-15 14:14:24.781 [INFO][4225] cni-plugin/k8s.go 386: Populated endpoint ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db57f239-8d03-4c8d-9b83-ba0c7c7afb10", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-d56596c95-5w28p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64efce68a26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:24.923743 containerd[1513]: 2025-01-15 14:14:24.781 [INFO][4225] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.196/32] ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:24.923743 containerd[1513]: 2025-01-15 14:14:24.781 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64efce68a26 ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:24.923743 containerd[1513]: 2025-01-15 14:14:24.809 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:24.923743 containerd[1513]: 2025-01-15 14:14:24.814 [INFO][4225] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db57f239-8d03-4c8d-9b83-ba0c7c7afb10", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96", Pod:"calico-apiserver-d56596c95-5w28p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64efce68a26", MAC:"2a:ea:e1:0e:7a:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:24.923743 containerd[1513]: 2025-01-15 14:14:24.894 [INFO][4225] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96" Namespace="calico-apiserver" Pod="calico-apiserver-d56596c95-5w28p" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:14:25.081256 systemd[1]: Started cri-containerd-26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba.scope - libcontainer container 26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba. Jan 15 14:14:25.318424 containerd[1513]: time="2025-01-15T14:14:25.318355588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f55659-m4s76,Uid:ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56,Namespace:calico-system,Attempt:1,} returns sandbox id \"26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba\"" Jan 15 14:14:25.361729 containerd[1513]: time="2025-01-15T14:14:25.358586401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:14:25.364457 containerd[1513]: time="2025-01-15T14:14:25.362065543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:14:25.364457 containerd[1513]: time="2025-01-15T14:14:25.362169286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:25.364457 containerd[1513]: time="2025-01-15T14:14:25.362452625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:25.436717 systemd-networkd[1415]: califbafc1cd3dd: Link UP Jan 15 14:14:25.443867 systemd-networkd[1415]: califbafc1cd3dd: Gained carrier Jan 15 14:14:25.485458 systemd[1]: Started cri-containerd-268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96.scope - libcontainer container 268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96. Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:24.799 [INFO][4316] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0 coredns-6f6b679f8f- kube-system 62a77434-74c8-4f4e-93f0-06f47a0ce14a 807 0 2025-01-15 14:13:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-8ino3.gb1.brightbox.com coredns-6f6b679f8f-2gnwv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califbafc1cd3dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:24.799 [INFO][4316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.154 [INFO][4367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" HandleID="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.192 [INFO][4367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" HandleID="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003192d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-8ino3.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-2gnwv", "timestamp":"2025-01-15 14:14:25.154962239 +0000 UTC"}, Hostname:"srv-8ino3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.195 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.195 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.195 [INFO][4367] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8ino3.gb1.brightbox.com' Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.206 [INFO][4367] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.245 [INFO][4367] ipam/ipam.go 372: Looking up existing affinities for host host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.291 [INFO][4367] ipam/ipam.go 489: Trying affinity for 192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.302 [INFO][4367] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.314 [INFO][4367] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.315 [INFO][4367] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.335 [INFO][4367] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5 Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.368 [INFO][4367] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.400 [INFO][4367] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.197/26] block=192.168.33.192/26 handle="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.401 [INFO][4367] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.197/26] handle="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.402 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 14:14:25.506683 containerd[1513]: 2025-01-15 14:14:25.402 [INFO][4367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.197/26] IPv6=[] ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" HandleID="k8s-pod-network.f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.509711 containerd[1513]: 2025-01-15 14:14:25.417 [INFO][4316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"62a77434-74c8-4f4e-93f0-06f47a0ce14a", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-2gnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbafc1cd3dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:25.509711 containerd[1513]: 2025-01-15 14:14:25.420 [INFO][4316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.197/32] ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.509711 containerd[1513]: 2025-01-15 14:14:25.420 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbafc1cd3dd ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.509711 containerd[1513]: 2025-01-15 14:14:25.442 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" 
WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.509711 containerd[1513]: 2025-01-15 14:14:25.445 [INFO][4316] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"62a77434-74c8-4f4e-93f0-06f47a0ce14a", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5", Pod:"coredns-6f6b679f8f-2gnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbafc1cd3dd", MAC:"1a:b6:28:5a:82:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:25.509711 containerd[1513]: 2025-01-15 14:14:25.498 [INFO][4316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5" Namespace="kube-system" Pod="coredns-6f6b679f8f-2gnwv" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:14:25.596262 systemd-networkd[1415]: cali7e6db23d61c: Link UP Jan 15 14:14:25.600197 systemd-networkd[1415]: cali7e6db23d61c: Gained carrier Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.153 [INFO][4337] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0 coredns-6f6b679f8f- kube-system d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716 806 0 2025-01-15 14:13:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-8ino3.gb1.brightbox.com coredns-6f6b679f8f-j2sst eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e6db23d61c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.153 [INFO][4337] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.362 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" HandleID="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.433 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" HandleID="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034bec0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-8ino3.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-j2sst", "timestamp":"2025-01-15 14:14:25.362767771 +0000 UTC"}, Hostname:"srv-8ino3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.434 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.434 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.436 [INFO][4432] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8ino3.gb1.brightbox.com' Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.452 [INFO][4432] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.482 [INFO][4432] ipam/ipam.go 372: Looking up existing affinities for host host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.512 [INFO][4432] ipam/ipam.go 489: Trying affinity for 192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.521 [INFO][4432] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.532 [INFO][4432] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.532 [INFO][4432] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.540 [INFO][4432] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.556 [INFO][4432] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.577 [INFO][4432] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.198/26] block=192.168.33.192/26 handle="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.577 [INFO][4432] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.198/26] handle="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" host="srv-8ino3.gb1.brightbox.com" Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.577 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 14:14:25.645574 containerd[1513]: 2025-01-15 14:14:25.577 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.198/26] IPv6=[] ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" HandleID="k8s-pod-network.01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.648563 containerd[1513]: 2025-01-15 14:14:25.585 [INFO][4337] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-j2sst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e6db23d61c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:25.648563 containerd[1513]: 2025-01-15 14:14:25.585 [INFO][4337] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.198/32] ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.648563 containerd[1513]: 2025-01-15 14:14:25.585 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e6db23d61c ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.648563 containerd[1513]: 2025-01-15 14:14:25.599 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" 
WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.648563 containerd[1513]: 2025-01-15 14:14:25.604 [INFO][4337] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f", Pod:"coredns-6f6b679f8f-j2sst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e6db23d61c", MAC:"2a:bb:7f:74:6a:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:14:25.648563 containerd[1513]: 2025-01-15 14:14:25.636 [INFO][4337] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f" Namespace="kube-system" Pod="coredns-6f6b679f8f-j2sst" WorkloadEndpoint="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:14:25.682572 containerd[1513]: time="2025-01-15T14:14:25.681363150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:14:25.682572 containerd[1513]: time="2025-01-15T14:14:25.681482965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:14:25.682572 containerd[1513]: time="2025-01-15T14:14:25.681507389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:25.684630 containerd[1513]: time="2025-01-15T14:14:25.683142058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:25.767504 containerd[1513]: time="2025-01-15T14:14:25.766970551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d56596c95-5w28p,Uid:db57f239-8d03-4c8d-9b83-ba0c7c7afb10,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96\"" Jan 15 14:14:25.785280 systemd[1]: Started cri-containerd-f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5.scope - libcontainer container f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5. Jan 15 14:14:25.813881 containerd[1513]: time="2025-01-15T14:14:25.812637763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:14:25.813881 containerd[1513]: time="2025-01-15T14:14:25.813668108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:14:25.813881 containerd[1513]: time="2025-01-15T14:14:25.813700518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:25.813881 containerd[1513]: time="2025-01-15T14:14:25.813826791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:14:25.864677 systemd-networkd[1415]: cali7245c61c5ff: Gained IPv6LL Jan 15 14:14:25.885235 systemd[1]: Started cri-containerd-01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f.scope - libcontainer container 01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f. Jan 15 14:14:25.915811 containerd[1513]: time="2025-01-15T14:14:25.914327202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2gnwv,Uid:62a77434-74c8-4f4e-93f0-06f47a0ce14a,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5\"" Jan 15 14:14:25.924360 containerd[1513]: time="2025-01-15T14:14:25.924256402Z" level=info msg="CreateContainer within sandbox \"f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 14:14:25.927275 systemd-networkd[1415]: cali64efce68a26: Gained IPv6LL Jan 15 14:14:25.973493 containerd[1513]: time="2025-01-15T14:14:25.973370708Z" level=info msg="CreateContainer within sandbox \"f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd693aef31ce4f81eff921ef7acd35a34638fb67eba9f36f2a292be112aa0638\"" Jan 15 14:14:25.976423 containerd[1513]: time="2025-01-15T14:14:25.976374165Z" level=info msg="StartContainer for \"bd693aef31ce4f81eff921ef7acd35a34638fb67eba9f36f2a292be112aa0638\"" Jan 15 14:14:25.984570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730553439.mount: Deactivated successfully. 
Jan 15 14:14:26.061260 containerd[1513]: time="2025-01-15T14:14:26.061124174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j2sst,Uid:d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716,Namespace:kube-system,Attempt:1,} returns sandbox id \"01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f\"" Jan 15 14:14:26.068072 containerd[1513]: time="2025-01-15T14:14:26.067559939Z" level=info msg="CreateContainer within sandbox \"01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 14:14:26.089186 systemd[1]: Started cri-containerd-bd693aef31ce4f81eff921ef7acd35a34638fb67eba9f36f2a292be112aa0638.scope - libcontainer container bd693aef31ce4f81eff921ef7acd35a34638fb67eba9f36f2a292be112aa0638. Jan 15 14:14:26.173006 containerd[1513]: time="2025-01-15T14:14:26.170930120Z" level=info msg="CreateContainer within sandbox \"01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1627369a2c20f49012114f829a663aeb4a527ff67dd784a36339e31decf97239\"" Jan 15 14:14:26.174356 containerd[1513]: time="2025-01-15T14:14:26.174221281Z" level=info msg="StartContainer for \"1627369a2c20f49012114f829a663aeb4a527ff67dd784a36339e31decf97239\"" Jan 15 14:14:26.193332 containerd[1513]: time="2025-01-15T14:14:26.193086797Z" level=info msg="StartContainer for \"bd693aef31ce4f81eff921ef7acd35a34638fb67eba9f36f2a292be112aa0638\" returns successfully" Jan 15 14:14:26.247192 systemd[1]: Started cri-containerd-1627369a2c20f49012114f829a663aeb4a527ff67dd784a36339e31decf97239.scope - libcontainer container 1627369a2c20f49012114f829a663aeb4a527ff67dd784a36339e31decf97239. Jan 15 14:14:26.357776 containerd[1513]: time="2025-01-15T14:14:26.357008711Z" level=info msg="StartContainer for \"1627369a2c20f49012114f829a663aeb4a527ff67dd784a36339e31decf97239\" returns successfully" Jan 15 14:14:26.618891 kubelet[2647]: I0115 14:14:26.617329 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j2sst" podStartSLOduration=57.617262602 podStartE2EDuration="57.617262602s" podCreationTimestamp="2025-01-15 14:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:14:26.617064811 +0000 UTC m=+63.790388385" watchObservedRunningTime="2025-01-15 14:14:26.617262602 +0000 UTC m=+63.790586171" Jan 15 14:14:26.759462 systemd-networkd[1415]: vxlan.calico: Gained IPv6LL Jan 15 14:14:26.823594 systemd-networkd[1415]: cali7e6db23d61c: Gained IPv6LL Jan 15 14:14:27.400026 systemd-networkd[1415]: califbafc1cd3dd: Gained IPv6LL Jan 15 14:14:27.428167 kubelet[2647]: I0115 14:14:27.428046 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2gnwv" podStartSLOduration=58.428020578 podStartE2EDuration="58.428020578s" podCreationTimestamp="2025-01-15 14:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:14:26.723272231 +0000 UTC m=+63.896595810" watchObservedRunningTime="2025-01-15 14:14:27.428020578 +0000 UTC m=+64.601344141" Jan 15 14:14:27.938506 containerd[1513]: time="2025-01-15T14:14:27.938412609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:27.942134 
containerd[1513]: time="2025-01-15T14:14:27.942006106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 15 14:14:27.944926 containerd[1513]: time="2025-01-15T14:14:27.944864462Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:27.955059 containerd[1513]: time="2025-01-15T14:14:27.954188311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:27.955950 containerd[1513]: time="2025-01-15T14:14:27.955881410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.8456234s" Jan 15 14:14:27.956228 containerd[1513]: time="2025-01-15T14:14:27.956199216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 15 14:14:27.958885 containerd[1513]: time="2025-01-15T14:14:27.958821397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 14:14:27.961820 containerd[1513]: time="2025-01-15T14:14:27.961773785Z" level=info msg="CreateContainer within sandbox \"e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 14:14:27.992709 containerd[1513]: time="2025-01-15T14:14:27.992548892Z" level=info msg="CreateContainer within sandbox \"e4caf114919b966340f6facad83ce3b11fc2697c70b395340a33ee9774c76240\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b0a094962e3d89716571d2124c246d67a24f21efddd4a515f65efeae7cc090c6\"" Jan 15 14:14:27.994038 containerd[1513]: time="2025-01-15T14:14:27.993653635Z" level=info msg="StartContainer for \"b0a094962e3d89716571d2124c246d67a24f21efddd4a515f65efeae7cc090c6\"" Jan 15 14:14:28.063418 systemd[1]: Started cri-containerd-b0a094962e3d89716571d2124c246d67a24f21efddd4a515f65efeae7cc090c6.scope - libcontainer container b0a094962e3d89716571d2124c246d67a24f21efddd4a515f65efeae7cc090c6. 
Jan 15 14:14:28.141491 containerd[1513]: time="2025-01-15T14:14:28.141426927Z" level=info msg="StartContainer for \"b0a094962e3d89716571d2124c246d67a24f21efddd4a515f65efeae7cc090c6\" returns successfully" Jan 15 14:14:28.621959 kubelet[2647]: I0115 14:14:28.621843 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d56596c95-lvrnc" podStartSLOduration=46.773361092 podStartE2EDuration="52.621814188s" podCreationTimestamp="2025-01-15 14:13:36 +0000 UTC" firstStartedPulling="2025-01-15 14:14:22.109660262 +0000 UTC m=+59.282983823" lastFinishedPulling="2025-01-15 14:14:27.958113358 +0000 UTC m=+65.131436919" observedRunningTime="2025-01-15 14:14:28.620959906 +0000 UTC m=+65.794283482" watchObservedRunningTime="2025-01-15 14:14:28.621814188 +0000 UTC m=+65.795137749" Jan 15 14:14:29.605525 kubelet[2647]: I0115 14:14:29.605420 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 14:14:30.549414 systemd[1]: run-containerd-runc-k8s.io-898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04-runc.d3ydvF.mount: Deactivated successfully. Jan 15 14:14:31.157281 containerd[1513]: time="2025-01-15T14:14:31.157176967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:31.159064 containerd[1513]: time="2025-01-15T14:14:31.158464039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 15 14:14:31.161055 containerd[1513]: time="2025-01-15T14:14:31.160230825Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:31.164170 containerd[1513]: time="2025-01-15T14:14:31.164131401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:31.167142 containerd[1513]: time="2025-01-15T14:14:31.167099222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.208193909s" Jan 15 14:14:31.167368 containerd[1513]: time="2025-01-15T14:14:31.167326209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 15 14:14:31.170740 containerd[1513]: time="2025-01-15T14:14:31.170683347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 14:14:31.178219 containerd[1513]: time="2025-01-15T14:14:31.176448590Z" level=info msg="CreateContainer within sandbox \"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 14:14:31.209715 containerd[1513]: time="2025-01-15T14:14:31.209608838Z" level=info msg="CreateContainer within sandbox \"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8d518878565bf4cbbce8717ec7f5f9e128d53bafa4e624a659ca930e996cfb03\"" Jan 15 14:14:31.213101 
containerd[1513]: time="2025-01-15T14:14:31.213055973Z" level=info msg="StartContainer for \"8d518878565bf4cbbce8717ec7f5f9e128d53bafa4e624a659ca930e996cfb03\"" Jan 15 14:14:31.293327 systemd[1]: Started cri-containerd-8d518878565bf4cbbce8717ec7f5f9e128d53bafa4e624a659ca930e996cfb03.scope - libcontainer container 8d518878565bf4cbbce8717ec7f5f9e128d53bafa4e624a659ca930e996cfb03. Jan 15 14:14:31.365536 containerd[1513]: time="2025-01-15T14:14:31.365319066Z" level=info msg="StartContainer for \"8d518878565bf4cbbce8717ec7f5f9e128d53bafa4e624a659ca930e996cfb03\" returns successfully" Jan 15 14:14:31.540912 systemd[1]: run-containerd-runc-k8s.io-8d518878565bf4cbbce8717ec7f5f9e128d53bafa4e624a659ca930e996cfb03-runc.Ec4Btr.mount: Deactivated successfully. Jan 15 14:14:35.697349 containerd[1513]: time="2025-01-15T14:14:35.697169566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:35.700865 containerd[1513]: time="2025-01-15T14:14:35.700292804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 15 14:14:35.702151 containerd[1513]: time="2025-01-15T14:14:35.702093328Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:35.708797 containerd[1513]: time="2025-01-15T14:14:35.708728194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:35.714287 containerd[1513]: time="2025-01-15T14:14:35.713402555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.54258742s" Jan 15 14:14:35.714287 containerd[1513]: time="2025-01-15T14:14:35.714107850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 15 14:14:35.716651 containerd[1513]: time="2025-01-15T14:14:35.716346083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 14:14:35.764602 containerd[1513]: time="2025-01-15T14:14:35.764210934Z" level=info msg="CreateContainer within sandbox \"26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 14:14:35.791469 containerd[1513]: time="2025-01-15T14:14:35.791390369Z" level=info msg="CreateContainer within sandbox \"26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3c2aefe4cd49fafb119548b481d1d59858214b31c6555c86686682cf7e4b7d44\"" Jan 15 14:14:35.793558 containerd[1513]: time="2025-01-15T14:14:35.793512766Z" level=info msg="StartContainer for \"3c2aefe4cd49fafb119548b481d1d59858214b31c6555c86686682cf7e4b7d44\"" Jan 15 14:14:35.885888 systemd[1]: Started 
cri-containerd-3c2aefe4cd49fafb119548b481d1d59858214b31c6555c86686682cf7e4b7d44.scope - libcontainer container 3c2aefe4cd49fafb119548b481d1d59858214b31c6555c86686682cf7e4b7d44. Jan 15 14:14:36.035028 containerd[1513]: time="2025-01-15T14:14:36.034813056Z" level=info msg="StartContainer for \"3c2aefe4cd49fafb119548b481d1d59858214b31c6555c86686682cf7e4b7d44\" returns successfully" Jan 15 14:14:36.775260 systemd[1]: run-containerd-runc-k8s.io-3c2aefe4cd49fafb119548b481d1d59858214b31c6555c86686682cf7e4b7d44-runc.qtPQB3.mount: Deactivated successfully. Jan 15 14:14:36.879170 kubelet[2647]: I0115 14:14:36.879040 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-677f55659-m4s76" podStartSLOduration=49.489184035 podStartE2EDuration="59.878950286s" podCreationTimestamp="2025-01-15 14:13:37 +0000 UTC" firstStartedPulling="2025-01-15 14:14:25.325949772 +0000 UTC m=+62.499273333" lastFinishedPulling="2025-01-15 14:14:35.715716015 +0000 UTC m=+72.889039584" observedRunningTime="2025-01-15 14:14:36.754059989 +0000 UTC m=+73.927383569" watchObservedRunningTime="2025-01-15 14:14:36.878950286 +0000 UTC m=+74.052273849" Jan 15 14:14:37.188089 containerd[1513]: time="2025-01-15T14:14:37.188015894Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:37.190666 containerd[1513]: time="2025-01-15T14:14:37.190607521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 15 14:14:37.195817 containerd[1513]: time="2025-01-15T14:14:37.195742408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.479334917s" Jan 15 14:14:37.195817 containerd[1513]: time="2025-01-15T14:14:37.195795730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 15 14:14:37.198129 containerd[1513]: time="2025-01-15T14:14:37.197382519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 14:14:37.202148 containerd[1513]: time="2025-01-15T14:14:37.202103291Z" level=info msg="CreateContainer within sandbox \"268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 14:14:37.227456 containerd[1513]: time="2025-01-15T14:14:37.227344846Z" level=info msg="CreateContainer within sandbox \"268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b4e1792ca590f4f309384c0d2b8844c53a12457335d89ac324a6a5cac111e4e0\"" Jan 15 14:14:37.229913 containerd[1513]: time="2025-01-15T14:14:37.229617081Z" level=info msg="StartContainer for \"b4e1792ca590f4f309384c0d2b8844c53a12457335d89ac324a6a5cac111e4e0\"" Jan 15 14:14:37.309228 systemd[1]: Started cri-containerd-b4e1792ca590f4f309384c0d2b8844c53a12457335d89ac324a6a5cac111e4e0.scope - libcontainer container b4e1792ca590f4f309384c0d2b8844c53a12457335d89ac324a6a5cac111e4e0. 
Jan 15 14:14:37.421085 containerd[1513]: time="2025-01-15T14:14:37.420860775Z" level=info msg="StartContainer for \"b4e1792ca590f4f309384c0d2b8844c53a12457335d89ac324a6a5cac111e4e0\" returns successfully" Jan 15 14:14:37.739835 kubelet[2647]: I0115 14:14:37.739706 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d56596c95-5w28p" podStartSLOduration=50.317543038 podStartE2EDuration="1m1.73966281s" podCreationTimestamp="2025-01-15 14:13:36 +0000 UTC" firstStartedPulling="2025-01-15 14:14:25.774681261 +0000 UTC m=+62.948004818" lastFinishedPulling="2025-01-15 14:14:37.196801021 +0000 UTC m=+74.370124590" observedRunningTime="2025-01-15 14:14:37.736627562 +0000 UTC m=+74.909951136" watchObservedRunningTime="2025-01-15 14:14:37.73966281 +0000 UTC m=+74.912986370" Jan 15 14:14:37.919254 kubelet[2647]: I0115 14:14:37.918181 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 14:14:42.238892 containerd[1513]: time="2025-01-15T14:14:42.237130985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:42.253175 containerd[1513]: time="2025-01-15T14:14:42.252996961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 15 14:14:42.262640 containerd[1513]: time="2025-01-15T14:14:42.262517814Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:42.270670 containerd[1513]: time="2025-01-15T14:14:42.270321804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 5.072876506s" Jan 15 14:14:42.270670 containerd[1513]: time="2025-01-15T14:14:42.270416750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 15 14:14:42.272324 containerd[1513]: time="2025-01-15T14:14:42.271592789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 14:14:42.282165 containerd[1513]: time="2025-01-15T14:14:42.281903172Z" level=info msg="CreateContainer within sandbox \"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 15 14:14:42.334879 containerd[1513]: time="2025-01-15T14:14:42.334782135Z" level=info msg="CreateContainer within sandbox \"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1d15d4a5d77494e8f732c8774531be825f250f26a03bf774f23c73e922143570\"" Jan 15 14:14:42.338027 containerd[1513]: time="2025-01-15T14:14:42.336328074Z" level=info msg="StartContainer for \"1d15d4a5d77494e8f732c8774531be825f250f26a03bf774f23c73e922143570\"" Jan 15 14:14:42.441222 
systemd[1]: Started cri-containerd-1d15d4a5d77494e8f732c8774531be825f250f26a03bf774f23c73e922143570.scope - libcontainer container 1d15d4a5d77494e8f732c8774531be825f250f26a03bf774f23c73e922143570. Jan 15 14:14:42.531050 containerd[1513]: time="2025-01-15T14:14:42.530782047Z" level=info msg="StartContainer for \"1d15d4a5d77494e8f732c8774531be825f250f26a03bf774f23c73e922143570\" returns successfully" Jan 15 14:14:42.768015 kubelet[2647]: I0115 14:14:42.767831 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hvqsn" podStartSLOduration=46.878966736 podStartE2EDuration="1m5.767726078s" podCreationTimestamp="2025-01-15 14:13:37 +0000 UTC" firstStartedPulling="2025-01-15 14:14:23.386446782 +0000 UTC m=+60.559770338" lastFinishedPulling="2025-01-15 14:14:42.27520612 +0000 UTC m=+79.448529680" observedRunningTime="2025-01-15 14:14:42.766477687 +0000 UTC m=+79.939801254" watchObservedRunningTime="2025-01-15 14:14:42.767726078 +0000 UTC m=+79.941049642" Jan 15 14:14:43.417206 kubelet[2647]: I0115 14:14:43.415086 2647 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 15 14:14:43.426971 kubelet[2647]: I0115 14:14:43.426183 2647 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 15 14:14:54.452464 systemd[1]: Started sshd@7-10.244.21.14:22-147.75.109.163:57824.service - OpenSSH per-connection server daemon (147.75.109.163:57824). Jan 15 14:14:55.421833 sshd[5030]: Accepted publickey for core from 147.75.109.163 port 57824 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:14:55.426014 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:14:55.438215 systemd-logind[1493]: New session 10 of user core. Jan 15 14:14:55.443258 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 14:14:56.725158 sshd[5030]: pam_unix(sshd:session): session closed for user core Jan 15 14:14:56.731508 systemd[1]: sshd@7-10.244.21.14:22-147.75.109.163:57824.service: Deactivated successfully. Jan 15 14:14:56.741808 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 14:14:56.744021 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Jan 15 14:14:56.747531 systemd-logind[1493]: Removed session 10. Jan 15 14:15:00.590546 systemd[1]: run-containerd-runc-k8s.io-898ac6d65f4f10dc663dec77896492a58247ee2a615cfed675882ddad84adc04-runc.y11ENy.mount: Deactivated successfully. Jan 15 14:15:01.887806 systemd[1]: Started sshd@8-10.244.21.14:22-147.75.109.163:58552.service - OpenSSH per-connection server daemon (147.75.109.163:58552). Jan 15 14:15:02.904537 sshd[5069]: Accepted publickey for core from 147.75.109.163 port 58552 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:02.907160 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:02.915667 systemd-logind[1493]: New session 11 of user core. Jan 15 14:15:02.921226 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 14:15:03.734764 sshd[5069]: pam_unix(sshd:session): session closed for user core Jan 15 14:15:03.743574 systemd[1]: sshd@8-10.244.21.14:22-147.75.109.163:58552.service: Deactivated successfully. Jan 15 14:15:03.747355 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 15 14:15:03.749543 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Jan 15 14:15:03.751644 systemd-logind[1493]: Removed session 11. Jan 15 14:15:08.895791 systemd[1]: Started sshd@9-10.244.21.14:22-147.75.109.163:34084.service - OpenSSH per-connection server daemon (147.75.109.163:34084). Jan 15 14:15:09.827194 sshd[5108]: Accepted publickey for core from 147.75.109.163 port 34084 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:09.830385 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:09.842750 systemd-logind[1493]: New session 12 of user core. Jan 15 14:15:09.849418 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 14:15:10.601843 sshd[5108]: pam_unix(sshd:session): session closed for user core Jan 15 14:15:10.608444 systemd[1]: sshd@9-10.244.21.14:22-147.75.109.163:34084.service: Deactivated successfully. Jan 15 14:15:10.612751 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 14:15:10.614358 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Jan 15 14:15:10.615943 systemd-logind[1493]: Removed session 12. Jan 15 14:15:15.764412 systemd[1]: Started sshd@10-10.244.21.14:22-147.75.109.163:34096.service - OpenSSH per-connection server daemon (147.75.109.163:34096). Jan 15 14:15:16.662910 sshd[5121]: Accepted publickey for core from 147.75.109.163 port 34096 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:16.666546 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:16.675557 systemd-logind[1493]: New session 13 of user core. Jan 15 14:15:16.682235 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 14:15:17.423453 sshd[5121]: pam_unix(sshd:session): session closed for user core Jan 15 14:15:17.429598 systemd[1]: sshd@10-10.244.21.14:22-147.75.109.163:34096.service: Deactivated successfully. Jan 15 14:15:17.432674 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 14:15:17.433716 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Jan 15 14:15:17.435477 systemd-logind[1493]: Removed session 13. Jan 15 14:15:17.586534 systemd[1]: Started sshd@11-10.244.21.14:22-147.75.109.163:48302.service - OpenSSH per-connection server daemon (147.75.109.163:48302). Jan 15 14:15:18.503819 sshd[5135]: Accepted publickey for core from 147.75.109.163 port 48302 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:18.508420 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:18.517193 systemd-logind[1493]: New session 14 of user core. Jan 15 14:15:18.521249 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 14:15:19.320777 sshd[5135]: pam_unix(sshd:session): session closed for user core Jan 15 14:15:19.327011 systemd[1]: sshd@11-10.244.21.14:22-147.75.109.163:48302.service: Deactivated successfully. Jan 15 14:15:19.329830 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 14:15:19.335487 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Jan 15 14:15:19.337736 systemd-logind[1493]: Removed session 14. Jan 15 14:15:19.484647 systemd[1]: Started sshd@12-10.244.21.14:22-147.75.109.163:48310.service - OpenSSH per-connection server daemon (147.75.109.163:48310). 
Jan 15 14:15:20.395262 sshd[5145]: Accepted publickey for core from 147.75.109.163 port 48310 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:20.398033 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:20.406881 systemd-logind[1493]: New session 15 of user core. Jan 15 14:15:20.415407 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 14:15:21.142064 sshd[5145]: pam_unix(sshd:session): session closed for user core Jan 15 14:15:21.149229 systemd[1]: sshd@12-10.244.21.14:22-147.75.109.163:48310.service: Deactivated successfully. Jan 15 14:15:21.154404 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 14:15:21.156067 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Jan 15 14:15:21.160868 systemd-logind[1493]: Removed session 15. Jan 15 14:15:24.234693 containerd[1513]: time="2025-01-15T14:15:24.234416322Z" level=info msg="StopPodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\"" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.525 [WARNING][5172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0", GenerateName:"calico-kube-controllers-677f55659-", Namespace:"calico-system", SelfLink:"", UID:"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f55659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba", Pod:"calico-kube-controllers-677f55659-m4s76", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7245c61c5ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.536 [INFO][5172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.536 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" iface="eth0" netns="" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.536 [INFO][5172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.537 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.633 [INFO][5179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.636 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.636 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.652 [WARNING][5179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.652 [INFO][5179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.655 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:24.660511 containerd[1513]: 2025-01-15 14:15:24.658 [INFO][5172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.662834 containerd[1513]: time="2025-01-15T14:15:24.662191100Z" level=info msg="TearDown network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" successfully" Jan 15 14:15:24.662834 containerd[1513]: time="2025-01-15T14:15:24.662281137Z" level=info msg="StopPodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" returns successfully" Jan 15 14:15:24.663959 containerd[1513]: time="2025-01-15T14:15:24.663880341Z" level=info msg="RemovePodSandbox for \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\"" Jan 15 14:15:24.664228 containerd[1513]: time="2025-01-15T14:15:24.663970418Z" level=info msg="Forcibly stopping sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\"" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.733 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0", GenerateName:"calico-kube-controllers-677f55659-", Namespace:"calico-system", SelfLink:"", UID:"ab3b97f5-1e67-4e9f-9a0c-89fe7060ec56", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f55659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"26460777b2fc79d9b0123e9597257319158d87ea2d2ccf8596027fac99125aba", Pod:"calico-kube-controllers-677f55659-m4s76", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7245c61c5ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.734 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.734 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" iface="eth0" netns="" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.734 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.734 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.771 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.771 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.771 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.780 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.780 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" HandleID="k8s-pod-network.c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--kube--controllers--677f55659--m4s76-eth0" Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.785 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:24.791025 containerd[1513]: 2025-01-15 14:15:24.787 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8" Jan 15 14:15:24.791025 containerd[1513]: time="2025-01-15T14:15:24.790940498Z" level=info msg="TearDown network for sandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" successfully" Jan 15 14:15:24.808209 containerd[1513]: time="2025-01-15T14:15:24.808077780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 14:15:24.808477 containerd[1513]: time="2025-01-15T14:15:24.808244908Z" level=info msg="RemovePodSandbox \"c0f73d59b92c39a01505364bda0ea63955a53e8e3f4ea63d8153187a5e1a3df8\" returns successfully" Jan 15 14:15:24.809453 containerd[1513]: time="2025-01-15T14:15:24.809405306Z" level=info msg="StopPodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\"" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.872 [WARNING][5224] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cb0d9ef-84b8-4638-9648-eb1fe2376a04", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54", Pod:"csi-node-driver-hvqsn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2cf6173d3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.872 [INFO][5224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.872 [INFO][5224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" iface="eth0" netns="" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.873 [INFO][5224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.873 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.908 [INFO][5230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.909 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.909 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.924 [WARNING][5230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.924 [INFO][5230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.930 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:24.936765 containerd[1513]: 2025-01-15 14:15:24.934 [INFO][5224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:24.939051 containerd[1513]: time="2025-01-15T14:15:24.936724155Z" level=info msg="TearDown network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" successfully" Jan 15 14:15:24.939051 containerd[1513]: time="2025-01-15T14:15:24.937113440Z" level=info msg="StopPodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" returns successfully" Jan 15 14:15:24.939051 containerd[1513]: time="2025-01-15T14:15:24.938061554Z" level=info msg="RemovePodSandbox for \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\"" Jan 15 14:15:24.939051 containerd[1513]: time="2025-01-15T14:15:24.938112906Z" level=info msg="Forcibly stopping sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\"" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:24.993 [WARNING][5248] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cb0d9ef-84b8-4638-9648-eb1fe2376a04", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"40f3729803b6e4300851ad856f905c308455a65e1624f3b15b57d0ca88421e54", Pod:"csi-node-driver-hvqsn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2cf6173d3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:24.994 [INFO][5248] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:24.994 [INFO][5248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" iface="eth0" netns="" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:24.994 [INFO][5248] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:24.994 [INFO][5248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.026 [INFO][5254] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.027 [INFO][5254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.027 [INFO][5254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.041 [WARNING][5254] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.041 [INFO][5254] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" HandleID="k8s-pod-network.7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Workload="srv--8ino3.gb1.brightbox.com-k8s-csi--node--driver--hvqsn-eth0" Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.045 [INFO][5254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.050265 containerd[1513]: 2025-01-15 14:15:25.047 [INFO][5248] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb" Jan 15 14:15:25.053360 containerd[1513]: time="2025-01-15T14:15:25.050309836Z" level=info msg="TearDown network for sandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" successfully" Jan 15 14:15:25.062040 containerd[1513]: time="2025-01-15T14:15:25.061941731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 14:15:25.062195 containerd[1513]: time="2025-01-15T14:15:25.062105868Z" level=info msg="RemovePodSandbox \"7f2dd7c4d418b7234f9aade5fed5da1e2c0560dc66e820235425d8da9adb51fb\" returns successfully" Jan 15 14:15:25.063352 containerd[1513]: time="2025-01-15T14:15:25.062873042Z" level=info msg="StopPodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\"" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.121 [WARNING][5272] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"62a77434-74c8-4f4e-93f0-06f47a0ce14a", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5", Pod:"coredns-6f6b679f8f-2gnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbafc1cd3dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.122 [INFO][5272] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.122 [INFO][5272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" iface="eth0" netns="" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.122 [INFO][5272] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.122 [INFO][5272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.166 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.166 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.166 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.175 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.175 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.177 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.181183 containerd[1513]: 2025-01-15 14:15:25.179 [INFO][5272] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.182206 containerd[1513]: time="2025-01-15T14:15:25.182154877Z" level=info msg="TearDown network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" successfully" Jan 15 14:15:25.182303 containerd[1513]: time="2025-01-15T14:15:25.182208517Z" level=info msg="StopPodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" returns successfully" Jan 15 14:15:25.182881 containerd[1513]: time="2025-01-15T14:15:25.182838286Z" level=info msg="RemovePodSandbox for \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\"" Jan 15 14:15:25.182953 containerd[1513]: time="2025-01-15T14:15:25.182887361Z" level=info msg="Forcibly stopping sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\"" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.235 [WARNING][5296] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"62a77434-74c8-4f4e-93f0-06f47a0ce14a", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"f6e3663e60272387fba344b935d61ecab5aa60f1393932ccde48b5fc57de82c5", Pod:"coredns-6f6b679f8f-2gnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbafc1cd3dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.235 [INFO][5296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.235 [INFO][5296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" iface="eth0" netns="" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.235 [INFO][5296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.235 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.271 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.271 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.271 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.287 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.287 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" HandleID="k8s-pod-network.1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--2gnwv-eth0" Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.291 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.298298 containerd[1513]: 2025-01-15 14:15:25.293 [INFO][5296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b" Jan 15 14:15:25.298298 containerd[1513]: time="2025-01-15T14:15:25.295772767Z" level=info msg="TearDown network for sandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" successfully" Jan 15 14:15:25.306366 containerd[1513]: time="2025-01-15T14:15:25.306269837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 14:15:25.306531 containerd[1513]: time="2025-01-15T14:15:25.306424565Z" level=info msg="RemovePodSandbox \"1bb5d375331c932a08896934bc7088837468ded723c7a79d865b167547dd508b\" returns successfully" Jan 15 14:15:25.307262 containerd[1513]: time="2025-01-15T14:15:25.307214817Z" level=info msg="StopPodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\"" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.371 [WARNING][5321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f", Pod:"coredns-6f6b679f8f-j2sst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e6db23d61c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.371 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.372 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" iface="eth0" netns="" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.372 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.372 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.400 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.400 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.400 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.410 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.410 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.413 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.417265 containerd[1513]: 2025-01-15 14:15:25.415 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.418305 containerd[1513]: time="2025-01-15T14:15:25.417404004Z" level=info msg="TearDown network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" successfully" Jan 15 14:15:25.418305 containerd[1513]: time="2025-01-15T14:15:25.417461780Z" level=info msg="StopPodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" returns successfully" Jan 15 14:15:25.418556 containerd[1513]: time="2025-01-15T14:15:25.418514909Z" level=info msg="RemovePodSandbox for \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\"" Jan 15 14:15:25.418630 containerd[1513]: time="2025-01-15T14:15:25.418582620Z" level=info msg="Forcibly stopping sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\"" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.488 [WARNING][5345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d0a1a3c0-5c7a-49bf-98cd-cbcfee7c8716", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"01f60bb5e316baa6826d612b46a371f8166e8cc0769285be204342b2feaecd6f", Pod:"coredns-6f6b679f8f-j2sst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e6db23d61c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.491 [INFO][5345] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.491 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" iface="eth0" netns="" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.491 [INFO][5345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.491 [INFO][5345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.527 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.527 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.528 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.536 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.536 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" HandleID="k8s-pod-network.0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Workload="srv--8ino3.gb1.brightbox.com-k8s-coredns--6f6b679f8f--j2sst-eth0" Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.538 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.541432 containerd[1513]: 2025-01-15 14:15:25.539 [INFO][5345] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593" Jan 15 14:15:25.542860 containerd[1513]: time="2025-01-15T14:15:25.541501534Z" level=info msg="TearDown network for sandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" successfully" Jan 15 14:15:25.549904 containerd[1513]: time="2025-01-15T14:15:25.549596823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 14:15:25.549904 containerd[1513]: time="2025-01-15T14:15:25.549669399Z" level=info msg="RemovePodSandbox \"0038eea4ae756a586b6b809e95f15e0cd0fad2702a0dc5b940a7ae0ba0fb8593\" returns successfully" Jan 15 14:15:25.551690 containerd[1513]: time="2025-01-15T14:15:25.551225409Z" level=info msg="StopPodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\"" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.606 [WARNING][5370] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db57f239-8d03-4c8d-9b83-ba0c7c7afb10", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96", Pod:"calico-apiserver-d56596c95-5w28p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64efce68a26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.607 [INFO][5370] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.607 [INFO][5370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" iface="eth0" netns="" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.607 [INFO][5370] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.607 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.637 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.637 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.637 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.649 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.649 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.651 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.655039 containerd[1513]: 2025-01-15 14:15:25.653 [INFO][5370] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.657233 containerd[1513]: time="2025-01-15T14:15:25.656052614Z" level=info msg="TearDown network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" successfully" Jan 15 14:15:25.657233 containerd[1513]: time="2025-01-15T14:15:25.656112213Z" level=info msg="StopPodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" returns successfully" Jan 15 14:15:25.657233 containerd[1513]: time="2025-01-15T14:15:25.656788231Z" level=info msg="RemovePodSandbox for \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\"" Jan 15 14:15:25.657233 containerd[1513]: time="2025-01-15T14:15:25.656822948Z" level=info msg="Forcibly stopping sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\"" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.709 [WARNING][5394] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0", GenerateName:"calico-apiserver-d56596c95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db57f239-8d03-4c8d-9b83-ba0c7c7afb10", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 14, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d56596c95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8ino3.gb1.brightbox.com", ContainerID:"268f429102cc7bdb52f9c5d5aca9a47a5516bb413f8131d7b92ab59aafeaea96", Pod:"calico-apiserver-d56596c95-5w28p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64efce68a26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.710 [INFO][5394] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.710 [INFO][5394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" iface="eth0" netns="" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.710 [INFO][5394] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.710 [INFO][5394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.740 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.741 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.741 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.750 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.750 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" HandleID="k8s-pod-network.debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Workload="srv--8ino3.gb1.brightbox.com-k8s-calico--apiserver--d56596c95--5w28p-eth0" Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.753 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 14:15:25.757150 containerd[1513]: 2025-01-15 14:15:25.755 [INFO][5394] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926" Jan 15 14:15:25.757150 containerd[1513]: time="2025-01-15T14:15:25.757077880Z" level=info msg="TearDown network for sandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" successfully" Jan 15 14:15:25.762364 containerd[1513]: time="2025-01-15T14:15:25.762237420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 14:15:25.762364 containerd[1513]: time="2025-01-15T14:15:25.762316094Z" level=info msg="RemovePodSandbox \"debedb969f2bb981294f79ac71df821ea868fe4d8b325c53f7a0eef61bfb7926\" returns successfully" Jan 15 14:15:26.293494 systemd[1]: Started sshd@13-10.244.21.14:22-147.75.109.163:48326.service - OpenSSH per-connection server daemon (147.75.109.163:48326). Jan 15 14:15:27.262229 sshd[5408]: Accepted publickey for core from 147.75.109.163 port 48326 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:27.266338 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:27.279465 systemd-logind[1493]: New session 16 of user core. Jan 15 14:15:27.286389 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 14:15:28.076714 sshd[5408]: pam_unix(sshd:session): session closed for user core Jan 15 14:15:28.081170 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Jan 15 14:15:28.081887 systemd[1]: sshd@13-10.244.21.14:22-147.75.109.163:48326.service: Deactivated successfully. Jan 15 14:15:28.084958 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 14:15:28.087970 systemd-logind[1493]: Removed session 16. Jan 15 14:15:33.255489 systemd[1]: Started sshd@14-10.244.21.14:22-147.75.109.163:35542.service - OpenSSH per-connection server daemon (147.75.109.163:35542). Jan 15 14:15:34.228304 sshd[5452]: Accepted publickey for core from 147.75.109.163 port 35542 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw Jan 15 14:15:34.233596 sshd[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:15:34.243174 systemd-logind[1493]: New session 17 of user core. Jan 15 14:15:34.251225 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 14:15:35.026706 sshd[5452]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:35.035356 systemd[1]: sshd@14-10.244.21.14:22-147.75.109.163:35542.service: Deactivated successfully.
Jan 15 14:15:35.036153 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit.
Jan 15 14:15:35.038659 systemd[1]: session-17.scope: Deactivated successfully.
Jan 15 14:15:35.045247 systemd-logind[1493]: Removed session 17.
Jan 15 14:15:40.191610 systemd[1]: Started sshd@15-10.244.21.14:22-147.75.109.163:46788.service - OpenSSH per-connection server daemon (147.75.109.163:46788).
Jan 15 14:15:41.123266 sshd[5504]: Accepted publickey for core from 147.75.109.163 port 46788 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:15:41.126494 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:15:41.141861 systemd-logind[1493]: New session 18 of user core.
Jan 15 14:15:41.153437 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 15 14:15:41.952937 sshd[5504]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:41.960125 systemd[1]: sshd@15-10.244.21.14:22-147.75.109.163:46788.service: Deactivated successfully.
Jan 15 14:15:41.967268 systemd[1]: session-18.scope: Deactivated successfully.
Jan 15 14:15:41.974101 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit.
Jan 15 14:15:41.977530 systemd-logind[1493]: Removed session 18.
Jan 15 14:15:42.112441 systemd[1]: Started sshd@16-10.244.21.14:22-147.75.109.163:46792.service - OpenSSH per-connection server daemon (147.75.109.163:46792).
Jan 15 14:15:43.010029 sshd[5517]: Accepted publickey for core from 147.75.109.163 port 46792 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:15:43.012653 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:15:43.022833 systemd-logind[1493]: New session 19 of user core.
Jan 15 14:15:43.031480 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 15 14:15:44.010970 sshd[5517]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:44.023969 systemd[1]: sshd@16-10.244.21.14:22-147.75.109.163:46792.service: Deactivated successfully.
Jan 15 14:15:44.028963 systemd[1]: session-19.scope: Deactivated successfully.
Jan 15 14:15:44.030340 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit.
Jan 15 14:15:44.032275 systemd-logind[1493]: Removed session 19.
Jan 15 14:15:44.171004 systemd[1]: Started sshd@17-10.244.21.14:22-147.75.109.163:46806.service - OpenSSH per-connection server daemon (147.75.109.163:46806).
Jan 15 14:15:45.094341 sshd[5531]: Accepted publickey for core from 147.75.109.163 port 46806 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:15:45.096918 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:15:45.105445 systemd-logind[1493]: New session 20 of user core.
Jan 15 14:15:45.110306 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 15 14:15:48.606035 sshd[5531]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:48.621062 systemd[1]: sshd@17-10.244.21.14:22-147.75.109.163:46806.service: Deactivated successfully.
Jan 15 14:15:48.623892 systemd[1]: session-20.scope: Deactivated successfully.
Jan 15 14:15:48.631680 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit.
Jan 15 14:15:48.633845 systemd-logind[1493]: Removed session 20.
Jan 15 14:15:48.751221 systemd[1]: Started sshd@18-10.244.21.14:22-147.75.109.163:56160.service - OpenSSH per-connection server daemon (147.75.109.163:56160).
Jan 15 14:15:49.692075 sshd[5557]: Accepted publickey for core from 147.75.109.163 port 56160 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:15:49.694700 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:15:49.707785 systemd-logind[1493]: New session 21 of user core.
Jan 15 14:15:49.714460 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 15 14:15:50.856906 sshd[5557]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:50.862393 systemd[1]: sshd@18-10.244.21.14:22-147.75.109.163:56160.service: Deactivated successfully.
Jan 15 14:15:50.866230 systemd[1]: session-21.scope: Deactivated successfully.
Jan 15 14:15:50.868707 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit.
Jan 15 14:15:50.871219 systemd-logind[1493]: Removed session 21.
Jan 15 14:15:51.022519 systemd[1]: Started sshd@19-10.244.21.14:22-147.75.109.163:56170.service - OpenSSH per-connection server daemon (147.75.109.163:56170).
Jan 15 14:15:52.011646 sshd[5567]: Accepted publickey for core from 147.75.109.163 port 56170 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:15:52.014491 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:15:52.062851 systemd-logind[1493]: New session 22 of user core.
Jan 15 14:15:52.072439 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 15 14:15:52.764012 sshd[5567]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:52.770917 systemd[1]: sshd@19-10.244.21.14:22-147.75.109.163:56170.service: Deactivated successfully.
Jan 15 14:15:52.773940 systemd[1]: session-22.scope: Deactivated successfully.
Jan 15 14:15:52.776946 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit.
Jan 15 14:15:52.778938 systemd-logind[1493]: Removed session 22.
Jan 15 14:15:57.939856 systemd[1]: Started sshd@20-10.244.21.14:22-147.75.109.163:38400.service - OpenSSH per-connection server daemon (147.75.109.163:38400).
Jan 15 14:15:58.827491 sshd[5588]: Accepted publickey for core from 147.75.109.163 port 38400 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:15:58.830025 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:15:58.838140 systemd-logind[1493]: New session 23 of user core.
Jan 15 14:15:58.845613 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 15 14:15:59.552385 sshd[5588]: pam_unix(sshd:session): session closed for user core
Jan 15 14:15:59.558256 systemd[1]: sshd@20-10.244.21.14:22-147.75.109.163:38400.service: Deactivated successfully.
Jan 15 14:15:59.562532 systemd[1]: session-23.scope: Deactivated successfully.
Jan 15 14:15:59.564088 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit.
Jan 15 14:15:59.566169 systemd-logind[1493]: Removed session 23.
Jan 15 14:16:04.719548 systemd[1]: Started sshd@21-10.244.21.14:22-147.75.109.163:38406.service - OpenSSH per-connection server daemon (147.75.109.163:38406).
Jan 15 14:16:05.661839 sshd[5635]: Accepted publickey for core from 147.75.109.163 port 38406 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:16:05.666444 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:16:05.679794 systemd-logind[1493]: New session 24 of user core.
Jan 15 14:16:05.687632 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 15 14:16:06.419584 sshd[5635]: pam_unix(sshd:session): session closed for user core
Jan 15 14:16:06.427599 systemd[1]: sshd@21-10.244.21.14:22-147.75.109.163:38406.service: Deactivated successfully.
Jan 15 14:16:06.433694 systemd[1]: session-24.scope: Deactivated successfully.
Jan 15 14:16:06.436754 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit.
Jan 15 14:16:06.439392 systemd-logind[1493]: Removed session 24.
Jan 15 14:16:11.580563 systemd[1]: Started sshd@22-10.244.21.14:22-147.75.109.163:45774.service - OpenSSH per-connection server daemon (147.75.109.163:45774).
Jan 15 14:16:12.500658 sshd[5666]: Accepted publickey for core from 147.75.109.163 port 45774 ssh2: RSA SHA256:QG8B548JP5tdUNqGYa2d+pJD6UDQ9KBL2A0BMPySjNw
Jan 15 14:16:12.503687 sshd[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:16:12.516322 systemd-logind[1493]: New session 25 of user core.
Jan 15 14:16:12.518231 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 15 14:16:13.299892 sshd[5666]: pam_unix(sshd:session): session closed for user core
Jan 15 14:16:13.307448 systemd[1]: sshd@22-10.244.21.14:22-147.75.109.163:45774.service: Deactivated successfully.
Jan 15 14:16:13.314310 systemd[1]: session-25.scope: Deactivated successfully.
Jan 15 14:16:13.316295 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit.
Jan 15 14:16:13.318954 systemd-logind[1493]: Removed session 25.