Feb 14 01:00:11.020918 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 14 01:00:11.020954 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 14 01:00:11.020981 kernel: BIOS-provided physical RAM map: Feb 14 01:00:11.021001 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 14 01:00:11.021012 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 14 01:00:11.021022 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 14 01:00:11.021034 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Feb 14 01:00:11.021045 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Feb 14 01:00:11.021056 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 14 01:00:11.021078 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 14 01:00:11.021090 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 14 01:00:11.021101 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 14 01:00:11.021117 kernel: NX (Execute Disable) protection: active Feb 14 01:00:11.021129 kernel: APIC: Static calls initialized Feb 14 01:00:11.021141 kernel: SMBIOS 2.8 present. Feb 14 01:00:11.021154 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Feb 14 01:00:11.021165 kernel: Hypervisor detected: KVM Feb 14 01:00:11.021182 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 14 01:00:11.021194 kernel: kvm-clock: using sched offset of 4427896150 cycles Feb 14 01:00:11.021206 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 14 01:00:11.021218 kernel: tsc: Detected 2499.998 MHz processor Feb 14 01:00:11.021230 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 14 01:00:11.021242 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 14 01:00:11.021254 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Feb 14 01:00:11.021265 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 14 01:00:11.021277 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 14 01:00:11.021293 kernel: Using GB pages for direct mapping Feb 14 01:00:11.021305 kernel: ACPI: Early table checksum verification disabled Feb 14 01:00:11.021317 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Feb 14 01:00:11.021329 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 14 01:00:11.021340 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 14 01:00:11.021352 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 14 01:00:11.021364 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Feb 14 01:00:11.021375 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 14 01:00:11.021387 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Feb 14 01:00:11.021404 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 14 01:00:11.021416 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 14 01:00:11.021428 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Feb 14 01:00:11.021440 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Feb 14 01:00:11.021452 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Feb 14 01:00:11.021470 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Feb 14 01:00:11.021483 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Feb 14 01:00:11.021500 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Feb 14 01:00:11.021512 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Feb 14 01:00:11.021524 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 14 01:00:11.021537 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 14 01:00:11.021549 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 14 01:00:11.021561 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Feb 14 01:00:11.021573 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 14 01:00:11.021590 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Feb 14 01:00:11.021603 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 14 01:00:11.021615 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Feb 14 01:00:11.021627 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 14 01:00:11.021639 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Feb 14 01:00:11.021651 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 14 01:00:11.021664 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Feb 14 01:00:11.021676 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 14 01:00:11.021688 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Feb 14 01:00:11.021700 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 14 01:00:11.021717 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Feb 14 01:00:11.021729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 14 01:00:11.021742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 14 01:00:11.021754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Feb 14 01:00:11.021766 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Feb 14 01:00:11.021779 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Feb 14 01:00:11.021791 kernel: Zone ranges: Feb 14 01:00:11.021803 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 14 01:00:11.021816 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Feb 14 01:00:11.021833 kernel: Normal empty Feb 14 01:00:11.021845 kernel: Movable zone start for each node Feb 14 01:00:11.021857 kernel: Early memory node ranges Feb 14 01:00:11.021869 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 14 01:00:11.021882 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Feb 14 01:00:11.021894 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Feb 14 01:00:11.021906 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 14 01:00:11.021918 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 14 01:00:11.021931 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Feb 14 01:00:11.021943 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 14 01:00:11.021961 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 14 01:00:11.022874 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Feb 14 01:00:11.022891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 14 01:00:11.022903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 14 01:00:11.022916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 14 01:00:11.022928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 14 01:00:11.022940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 14 01:00:11.022953 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 14 01:00:11.022965 kernel: TSC deadline timer available Feb 14 01:00:11.023000 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Feb 14 01:00:11.023013 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 14 01:00:11.023026 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 14 01:00:11.023038 kernel: Booting paravirtualized kernel on KVM Feb 14 01:00:11.023051 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 14 01:00:11.023075 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 14 01:00:11.023090 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 14 01:00:11.023103 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 14 01:00:11.023115 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 14 01:00:11.023133 kernel: kvm-guest: PV spinlocks enabled Feb 14 01:00:11.023146 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 14 01:00:11.023160 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 14 01:00:11.023173 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 14 01:00:11.023185 kernel: random: crng init done Feb 14 01:00:11.023198 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 01:00:11.023210 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 14 01:00:11.023228 kernel: Fallback order for Node 0: 0 Feb 14 01:00:11.023241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Feb 14 01:00:11.023253 kernel: Policy zone: DMA32 Feb 14 01:00:11.023266 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 14 01:00:11.023278 kernel: software IO TLB: area num 16. Feb 14 01:00:11.023291 kernel: Memory: 1901520K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 194836K reserved, 0K cma-reserved) Feb 14 01:00:11.023303 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 14 01:00:11.023316 kernel: Kernel/User page tables isolation: enabled Feb 14 01:00:11.023328 kernel: ftrace: allocating 37921 entries in 149 pages Feb 14 01:00:11.023346 kernel: ftrace: allocated 149 pages with 4 groups Feb 14 01:00:11.023359 kernel: Dynamic Preempt: voluntary Feb 14 01:00:11.023371 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 14 01:00:11.023384 kernel: rcu: RCU event tracing is enabled. 
Feb 14 01:00:11.023397 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 14 01:00:11.023410 kernel: Trampoline variant of Tasks RCU enabled. Feb 14 01:00:11.023436 kernel: Rude variant of Tasks RCU enabled. Feb 14 01:00:11.023453 kernel: Tracing variant of Tasks RCU enabled. Feb 14 01:00:11.023467 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 14 01:00:11.023480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 14 01:00:11.023493 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Feb 14 01:00:11.023505 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 14 01:00:11.023523 kernel: Console: colour VGA+ 80x25 Feb 14 01:00:11.023536 kernel: printk: console [tty0] enabled Feb 14 01:00:11.023549 kernel: printk: console [ttyS0] enabled Feb 14 01:00:11.023562 kernel: ACPI: Core revision 20230628 Feb 14 01:00:11.023576 kernel: APIC: Switch to symmetric I/O mode setup Feb 14 01:00:11.023593 kernel: x2apic enabled Feb 14 01:00:11.023606 kernel: APIC: Switched APIC routing to: physical x2apic Feb 14 01:00:11.023620 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 14 01:00:11.023633 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Feb 14 01:00:11.023646 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 14 01:00:11.023659 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 14 01:00:11.023672 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 14 01:00:11.023684 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 14 01:00:11.023697 kernel: Spectre V2 : Mitigation: Retpolines Feb 14 01:00:11.023710 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 14 01:00:11.023728 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 14 01:00:11.023741 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 14 01:00:11.023754 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 14 01:00:11.023767 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 14 01:00:11.023780 kernel: MDS: Mitigation: Clear CPU buffers Feb 14 01:00:11.023793 kernel: MMIO Stale Data: Unknown: No mitigations Feb 14 01:00:11.023806 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 14 01:00:11.023818 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 14 01:00:11.023832 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 14 01:00:11.023844 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 14 01:00:11.023857 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 14 01:00:11.023875 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 14 01:00:11.023889 kernel: Freeing SMP alternatives memory: 32K Feb 14 01:00:11.023902 kernel: pid_max: default: 32768 minimum: 301 Feb 14 01:00:11.023914 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 14 01:00:11.023927 kernel: landlock: Up and running. Feb 14 01:00:11.023940 kernel: SELinux: Initializing. 
Feb 14 01:00:11.023953 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 14 01:00:11.023966 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 14 01:00:11.023994 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Feb 14 01:00:11.024008 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 14 01:00:11.024021 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 14 01:00:11.024041 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 14 01:00:11.024054 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Feb 14 01:00:11.024078 kernel: signal: max sigframe size: 1776 Feb 14 01:00:11.024092 kernel: rcu: Hierarchical SRCU implementation. Feb 14 01:00:11.024106 kernel: rcu: Max phase no-delay instances is 400. Feb 14 01:00:11.024119 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 14 01:00:11.024132 kernel: smp: Bringing up secondary CPUs ... Feb 14 01:00:11.024145 kernel: smpboot: x86: Booting SMP configuration: Feb 14 01:00:11.024158 kernel: .... node #0, CPUs: #1 Feb 14 01:00:11.024177 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 14 01:00:11.024190 kernel: smp: Brought up 1 node, 2 CPUs Feb 14 01:00:11.024203 kernel: smpboot: Max logical packages: 16 Feb 14 01:00:11.024216 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Feb 14 01:00:11.024229 kernel: devtmpfs: initialized Feb 14 01:00:11.024242 kernel: x86/mm: Memory block size: 128MB Feb 14 01:00:11.024256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 14 01:00:11.024269 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 14 01:00:11.024282 kernel: pinctrl core: initialized pinctrl subsystem Feb 14 01:00:11.024300 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 14 01:00:11.024313 kernel: audit: initializing netlink subsys (disabled) Feb 14 01:00:11.024326 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 14 01:00:11.024339 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 14 01:00:11.024352 kernel: audit: type=2000 audit(1739494809.445:1): state=initialized audit_enabled=0 res=1 Feb 14 01:00:11.024365 kernel: cpuidle: using governor menu Feb 14 01:00:11.024378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 14 01:00:11.024391 kernel: dca service started, version 1.12.1 Feb 14 01:00:11.024404 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 14 01:00:11.024422 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 14 01:00:11.024436 kernel: PCI: Using configuration type 1 for base access Feb 14 01:00:11.024449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 14 01:00:11.024462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 14 01:00:11.024475 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 14 01:00:11.024488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 14 01:00:11.024502 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 14 01:00:11.024515 kernel: ACPI: Added _OSI(Module Device) Feb 14 01:00:11.024528 kernel: ACPI: Added _OSI(Processor Device) Feb 14 01:00:11.024546 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 14 01:00:11.024559 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 14 01:00:11.024572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 14 01:00:11.024585 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 14 01:00:11.024598 kernel: ACPI: Interpreter enabled Feb 14 01:00:11.024613 kernel: ACPI: PM: (supports S0 S5) Feb 14 01:00:11.024626 kernel: ACPI: Using IOAPIC for interrupt routing Feb 14 01:00:11.024639 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 14 01:00:11.024652 kernel: PCI: Using E820 reservations for host bridge windows Feb 14 01:00:11.024670 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 14 01:00:11.024683 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 14 01:00:11.024940 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:00:11.025156 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 14 01:00:11.025326 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 14 01:00:11.025347 kernel: PCI host bridge to bus 0000:00 Feb 14 01:00:11.025526 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 14 01:00:11.025694 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 14 01:00:11.025848 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 14 01:00:11.026026 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Feb 14 01:00:11.026321 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 14 01:00:11.026478 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Feb 14 01:00:11.026629 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 14 01:00:11.026822 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 14 01:00:11.027576 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Feb 14 01:00:11.027758 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Feb 14 01:00:11.027928 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Feb 14 01:00:11.028140 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Feb 14 01:00:11.028313 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 14 01:00:11.028494 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.028680 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Feb 14 01:00:11.028860 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.029048 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Feb 14 01:00:11.029251 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.029423 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Feb 14 01:00:11.029602 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 14 
01:00:11.029781 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Feb 14 01:00:11.030036 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.030283 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Feb 14 01:00:11.030469 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.030639 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Feb 14 01:00:11.030817 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.031043 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Feb 14 01:00:11.031235 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 14 01:00:11.031401 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Feb 14 01:00:11.031586 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 14 01:00:11.031773 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 14 01:00:11.032022 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Feb 14 01:00:11.032294 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Feb 14 01:00:11.032473 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Feb 14 01:00:11.032659 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 14 01:00:11.032826 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 14 01:00:11.033598 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Feb 14 01:00:11.033782 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Feb 14 01:00:11.033963 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 14 01:00:11.034166 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 14 01:00:11.034377 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 14 01:00:11.034544 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Feb 14 01:00:11.034711 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Feb 14 01:00:11.034905 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 14 01:00:11.035126 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 14 01:00:11.035309 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Feb 14 01:00:11.035495 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Feb 14 01:00:11.035664 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 14 01:00:11.035829 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 14 01:00:11.036017 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 14 01:00:11.036214 kernel: pci_bus 0000:02: extended config space not accessible Feb 14 01:00:11.036411 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Feb 14 01:00:11.036615 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Feb 14 01:00:11.036795 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 14 01:00:11.036985 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 14 01:00:11.037189 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 14 01:00:11.037389 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Feb 14 01:00:11.037570 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 14 01:00:11.037741 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 14 01:00:11.037917 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 14 01:00:11.038134 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 14 
01:00:11.038309 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Feb 14 01:00:11.038500 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 14 01:00:11.038693 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 14 01:00:11.038870 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 14 01:00:11.039102 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 14 01:00:11.039275 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 14 01:00:11.039449 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 14 01:00:11.039613 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 14 01:00:11.039776 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 14 01:00:11.039941 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 14 01:00:11.043940 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 14 01:00:11.044178 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 14 01:00:11.044351 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 14 01:00:11.044522 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 14 01:00:11.044699 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 14 01:00:11.044866 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 14 01:00:11.045109 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 14 01:00:11.045277 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 14 01:00:11.045441 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 14 01:00:11.045462 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 14 01:00:11.045476 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 14 01:00:11.045490 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 14 01:00:11.045516 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 14 01:00:11.045530 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 14 01:00:11.045543 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 14 01:00:11.045556 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 14 01:00:11.045570 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 14 01:00:11.045588 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 14 01:00:11.045601 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 14 01:00:11.045614 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 14 01:00:11.045627 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 14 01:00:11.045645 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 14 01:00:11.045659 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 14 01:00:11.045672 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 14 01:00:11.045685 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 14 01:00:11.045698 kernel: iommu: Default domain type: Translated Feb 14 01:00:11.045711 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 14 01:00:11.045724 kernel: PCI: Using ACPI for IRQ routing Feb 14 01:00:11.045738 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 14 01:00:11.045750 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 14 01:00:11.045768 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Feb 14 01:00:11.045932 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Feb 14 01:00:11.047189 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 14 01:00:11.047366 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 14 01:00:11.047388 kernel: vgaarb: loaded Feb 14 01:00:11.047403 kernel: clocksource: Switched to clocksource kvm-clock Feb 14 01:00:11.047416 kernel: VFS: Disk quotas dquot_6.6.0 Feb 14 01:00:11.047430 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 14 01:00:11.047452 kernel: pnp: PnP ACPI init Feb 14 01:00:11.047637 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 14 01:00:11.047659 kernel: pnp: PnP ACPI: found 5 devices Feb 14 01:00:11.047673 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 14 01:00:11.047687 kernel: NET: Registered PF_INET protocol family Feb 14 01:00:11.047700 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 14 01:00:11.047714 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 14 01:00:11.047727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 14 01:00:11.047748 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 14 01:00:11.047761 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 14 01:00:11.047775 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 14 01:00:11.047788 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 14 01:00:11.047802 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 14 01:00:11.047815 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 14 01:00:11.047829 kernel: NET: Registered PF_XDP protocol family Feb 14 01:00:11.048011 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Feb 14 01:00:11.048199 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:00:11.048377 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:00:11.048545 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 14 01:00:11.048714 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 14 01:00:11.048882 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 14 01:00:11.049668 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 14 01:00:11.049841 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 14 01:00:11.052211 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 14 01:00:11.052390 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 14 01:00:11.052561 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 14 01:00:11.052730 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 14 01:00:11.052897 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 14 01:00:11.053094 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 14 01:00:11.053264 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 14 01:00:11.053440 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 14 01:00:11.053640 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 14 01:00:11.053821 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 14 
01:00:11.054008 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 14 01:00:11.054193 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 14 01:00:11.054364 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 14 01:00:11.054532 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 14 01:00:11.054701 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 14 01:00:11.054869 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 14 01:00:11.055075 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 14 01:00:11.055249 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 14 01:00:11.055417 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 14 01:00:11.055584 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 14 01:00:11.055752 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 14 01:00:11.055929 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 14 01:00:11.058163 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 14 01:00:11.058346 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 14 01:00:11.058556 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 14 01:00:11.058732 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 14 01:00:11.058904 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 14 01:00:11.059118 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 14 01:00:11.059290 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 14 01:00:11.059461 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 14 01:00:11.059640 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 14 01:00:11.059814 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 14 01:00:11.065823 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 14 01:00:11.066053 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 14 01:00:11.066247 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 14 01:00:11.066431 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 14 01:00:11.066601 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 14 01:00:11.066768 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 14 01:00:11.066937 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 14 01:00:11.067134 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 14 01:00:11.067303 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 14 01:00:11.067469 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 14 01:00:11.067628 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 14 01:00:11.067781 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 14 01:00:11.067940 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 14 01:00:11.069527 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Feb 14 01:00:11.069710 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 14 01:00:11.069877 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Feb 14 01:00:11.070114 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 14 01:00:11.070293 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Feb 14 01:00:11.070466 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Feb 14 01:00:11.070658 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Feb 14 01:00:11.070841 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Feb 14 01:00:11.072105 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Feb 14 01:00:11.072271 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 14 01:00:11.072439 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Feb 14 01:00:11.072596 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Feb 14 01:00:11.072751 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 14 01:00:11.072926 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 14 01:00:11.073129 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Feb 14 01:00:11.073287 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 14 01:00:11.073463 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Feb 14 01:00:11.073621 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Feb 14 01:00:11.073777 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 14 01:00:11.073942 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Feb 14 01:00:11.077334 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Feb 14 01:00:11.077498 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 14 01:00:11.077666 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Feb 14 01:00:11.077822 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Feb 14 01:00:11.078027 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 14 01:00:11.078211 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Feb 14 01:00:11.078378 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Feb 14 01:00:11.078533 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 14 01:00:11.078555 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 14 01:00:11.078569 kernel: PCI: CLS 0 bytes, default 64 Feb 14 01:00:11.078583 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 14 01:00:11.078597 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 14 01:00:11.078612 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 14 01:00:11.078626 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 14 01:00:11.078640 kernel: Initialise system trusted keyrings Feb 14 01:00:11.078661 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 14 01:00:11.078675 kernel: Key type asymmetric registered Feb 14 01:00:11.078689 kernel: Asymmetric key parser 'x509' registered Feb 14 01:00:11.078702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 14 01:00:11.078716 kernel: io scheduler mq-deadline registered Feb 14 01:00:11.078730 kernel: io scheduler kyber registered Feb 14 01:00:11.078743 kernel: io scheduler bfq registered Feb 14 01:00:11.078910 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 14 01:00:11.079111 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 14 01:00:11.079289 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.079458 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 14 01:00:11.079624 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Feb 14 01:00:11.079789 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.079959 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 14 01:00:11.086888 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 14 01:00:11.087106 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.087279 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 14 01:00:11.087446 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 14 01:00:11.087613 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.087781 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 14 01:00:11.087946 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 14 01:00:11.088158 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.088326 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 14 01:00:11.088489 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 14 01:00:11.088656 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.088833 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 14 01:00:11.089022 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 14 01:00:11.089223 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.089392 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 14 01:00:11.089563 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 14 01:00:11.089729 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 01:00:11.089751 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 14 01:00:11.089766 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 14 01:00:11.089787 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 14 01:00:11.089801 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 14 01:00:11.089816 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 14 01:00:11.089830 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 14 01:00:11.089844 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 14 01:00:11.089858 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 14 01:00:11.089872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 14 01:00:11.090097 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 14 01:00:11.090268 kernel: rtc_cmos 00:03: registered as rtc0 Feb 14 01:00:11.090425 kernel: rtc_cmos 00:03: setting system clock to 2025-02-14T01:00:10 UTC (1739494810) Feb 14 01:00:11.090587 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 14 01:00:11.090608 kernel: intel_pstate: CPU model not supported Feb 14 01:00:11.090622 kernel: NET: Registered PF_INET6 protocol family Feb 14 01:00:11.090636 kernel: Segment Routing with IPv6 Feb 14 01:00:11.090650 kernel: In-situ OAM (IOAM) with IPv6 Feb 14 
01:00:11.090663 kernel: NET: Registered PF_PACKET protocol family Feb 14 01:00:11.090677 kernel: Key type dns_resolver registered Feb 14 01:00:11.090698 kernel: IPI shorthand broadcast: enabled Feb 14 01:00:11.090712 kernel: sched_clock: Marking stable (1269003602, 237133765)->(1631388127, -125250760) Feb 14 01:00:11.090726 kernel: registered taskstats version 1 Feb 14 01:00:11.090740 kernel: Loading compiled-in X.509 certificates Feb 14 01:00:11.090754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 14 01:00:11.090768 kernel: Key type .fscrypt registered Feb 14 01:00:11.090781 kernel: Key type fscrypt-provisioning registered Feb 14 01:00:11.090795 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 14 01:00:11.090814 kernel: ima: Allocated hash algorithm: sha1 Feb 14 01:00:11.090828 kernel: ima: No architecture policies found Feb 14 01:00:11.090842 kernel: clk: Disabling unused clocks Feb 14 01:00:11.090856 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 14 01:00:11.090870 kernel: Write protecting the kernel read-only data: 36864k Feb 14 01:00:11.090884 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 14 01:00:11.090897 kernel: Run /init as init process Feb 14 01:00:11.090911 kernel: with arguments: Feb 14 01:00:11.090925 kernel: /init Feb 14 01:00:11.090938 kernel: with environment: Feb 14 01:00:11.090957 kernel: HOME=/ Feb 14 01:00:11.091003 kernel: TERM=linux Feb 14 01:00:11.091020 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 14 01:00:11.091037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 01:00:11.091054 systemd[1]: Detected virtualization kvm. Feb 14 01:00:11.091081 systemd[1]: Detected architecture x86-64. Feb 14 01:00:11.091096 systemd[1]: Running in initrd. Feb 14 01:00:11.091117 systemd[1]: No hostname configured, using default hostname. Feb 14 01:00:11.091132 systemd[1]: Hostname set to <localhost>. Feb 14 01:00:11.091147 systemd[1]: Initializing machine ID from VM UUID. Feb 14 01:00:11.091161 systemd[1]: Queued start job for default target initrd.target. Feb 14 01:00:11.091176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 01:00:11.091190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 01:00:11.091205 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 14 01:00:11.091221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 01:00:11.091241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 14 01:00:11.091256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 14 01:00:11.091273 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 14 01:00:11.091288 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 14 01:00:11.091303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 01:00:11.091317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:00:11.091332 systemd[1]: Reached target paths.target - Path Units. Feb 14 01:00:11.091352 systemd[1]: Reached target slices.target - Slice Units. Feb 14 01:00:11.091366 systemd[1]: Reached target swap.target - Swaps. Feb 14 01:00:11.091386 systemd[1]: Reached target timers.target - Timer Units. Feb 14 01:00:11.091400 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 01:00:11.091415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 01:00:11.091430 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 01:00:11.091445 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 01:00:11.091460 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:00:11.091475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 01:00:11.091494 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 01:00:11.091509 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 01:00:11.091524 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 14 01:00:11.091539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 01:00:11.091553 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 14 01:00:11.091568 systemd[1]: Starting systemd-fsck-usr.service... Feb 14 01:00:11.091583 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 01:00:11.091598 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 01:00:11.091617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:00:11.091673 systemd-journald[200]: Collecting audit messages is disabled. Feb 14 01:00:11.091707 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 14 01:00:11.091722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:00:11.091743 systemd[1]: Finished systemd-fsck-usr.service. Feb 14 01:00:11.091759 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 01:00:11.091775 systemd-journald[200]: Journal started Feb 14 01:00:11.091807 systemd-journald[200]: Runtime Journal (/run/log/journal/0c8230c7a1af4d3d8ab69f511b49463b) is 4.7M, max 38.0M, 33.2M free. Feb 14 01:00:11.032309 systemd-modules-load[201]: Inserted module 'overlay' Feb 14 01:00:11.132509 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 01:00:11.132544 kernel: Bridge firewalling registered Feb 14 01:00:11.096921 systemd-modules-load[201]: Inserted module 'br_netfilter' Feb 14 01:00:11.138991 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 01:00:11.138405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 01:00:11.140504 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:00:11.142401 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 01:00:11.151183 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 01:00:11.170234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 14 01:00:11.175165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 01:00:11.183739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 01:00:11.189038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:00:11.201605 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:00:11.203797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:00:11.212183 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 14 01:00:11.213462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 01:00:11.224049 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 01:00:11.230192 dracut-cmdline[234]: dracut-dracut-053 Feb 14 01:00:11.237326 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 14 01:00:11.273292 systemd-resolved[238]: Positive Trust Anchors: Feb 14 01:00:11.273323 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 01:00:11.273371 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 01:00:11.278076 systemd-resolved[238]: Defaulting to hostname 'linux'. Feb 14 01:00:11.279937 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 01:00:11.281690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:00:11.341029 kernel: SCSI subsystem initialized Feb 14 01:00:11.353016 kernel: Loading iSCSI transport class v2.0-870. Feb 14 01:00:11.367011 kernel: iscsi: registered transport (tcp) Feb 14 01:00:11.393312 kernel: iscsi: registered transport (qla4xxx) Feb 14 01:00:11.393378 kernel: QLogic iSCSI HBA Driver Feb 14 01:00:11.448042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 14 01:00:11.456193 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 14 01:00:11.488687 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 14 01:00:11.488764 kernel: device-mapper: uevent: version 1.0.3 Feb 14 01:00:11.489620 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 14 01:00:11.538059 kernel: raid6: sse2x4 gen() 13949 MB/s Feb 14 01:00:11.556034 kernel: raid6: sse2x2 gen() 9660 MB/s Feb 14 01:00:11.575994 kernel: raid6: sse2x1 gen() 10075 MB/s Feb 14 01:00:11.576133 kernel: raid6: using algorithm sse2x4 gen() 13949 MB/s Feb 14 01:00:11.594717 kernel: raid6: .... xor() 7738 MB/s, rmw enabled Feb 14 01:00:11.594781 kernel: raid6: using ssse3x2 recovery algorithm Feb 14 01:00:11.622016 kernel: xor: automatically using best checksumming function avx Feb 14 01:00:11.819054 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 14 01:00:11.834269 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 14 01:00:11.841261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 01:00:11.864867 systemd-udevd[419]: Using default interface naming scheme 'v255'. Feb 14 01:00:11.871875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 01:00:11.880153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 14 01:00:11.908657 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Feb 14 01:00:11.947871 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 01:00:11.955261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 01:00:12.063053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 01:00:12.071377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 14 01:00:12.093773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 14 01:00:12.099111 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 01:00:12.101584 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 01:00:12.102345 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 01:00:12.113222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 14 01:00:12.130902 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 14 01:00:12.185012 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 14 01:00:12.227323 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 14 01:00:12.227802 kernel: cryptd: max_cpu_qlen set to 1000 Feb 14 01:00:12.227826 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 14 01:00:12.227854 kernel: GPT:17805311 != 125829119 Feb 14 01:00:12.227872 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 14 01:00:12.227889 kernel: GPT:17805311 != 125829119 Feb 14 01:00:12.227905 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 14 01:00:12.227922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:00:12.227939 kernel: AVX version of gcm_enc/dec engaged. Feb 14 01:00:12.227955 kernel: AES CTR mode by8 optimization enabled Feb 14 01:00:12.233646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 01:00:12.233834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:00:12.238684 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 14 01:00:12.241373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 01:00:12.255171 kernel: ACPI: bus type USB registered Feb 14 01:00:12.255203 kernel: usbcore: registered new interface driver usbfs Feb 14 01:00:12.255222 kernel: usbcore: registered new interface driver hub Feb 14 01:00:12.241522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:00:12.254287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:00:12.260983 kernel: usbcore: registered new device driver usb Feb 14 01:00:12.265174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:00:12.316994 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (475) Feb 14 01:00:12.326993 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) Feb 14 01:00:12.357343 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 14 01:00:12.411293 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 14 01:00:12.411553 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 14 01:00:12.411765 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 14 01:00:12.411987 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 14 01:00:12.412211 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 14 01:00:12.412414 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 14 01:00:12.412613 kernel: hub 1-0:1.0: USB hub found Feb 14 01:00:12.412859 kernel: hub 1-0:1.0: 4 ports detected Feb 14 01:00:12.413147 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 14 01:00:12.413424 kernel: hub 2-0:1.0: USB hub found Feb 14 01:00:12.413659 kernel: hub 2-0:1.0: 4 ports detected Feb 14 01:00:12.413875 kernel: libata version 3.00 loaded. Feb 14 01:00:12.410569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:00:12.416886 kernel: ahci 0000:00:1f.2: version 3.0 Feb 14 01:00:12.423715 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 14 01:00:12.423756 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 14 01:00:12.423986 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 14 01:00:12.424208 kernel: scsi host0: ahci Feb 14 01:00:12.424429 kernel: scsi host1: ahci Feb 14 01:00:12.424650 kernel: scsi host2: ahci Feb 14 01:00:12.425180 kernel: scsi host3: ahci Feb 14 01:00:12.425392 kernel: scsi host4: ahci Feb 14 01:00:12.425587 kernel: scsi host5: ahci Feb 14 01:00:12.425777 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Feb 14 01:00:12.425799 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Feb 14 01:00:12.425825 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Feb 14 01:00:12.425844 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Feb 14 01:00:12.425862 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Feb 14 01:00:12.425880 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Feb 14 01:00:12.418991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 14 01:00:12.448757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Feb 14 01:00:12.454700 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 14 01:00:12.455660 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 14 01:00:12.463207 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 14 01:00:12.466177 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 01:00:12.475308 disk-uuid[563]: Primary Header is updated. Feb 14 01:00:12.475308 disk-uuid[563]: Secondary Entries is updated. Feb 14 01:00:12.475308 disk-uuid[563]: Secondary Header is updated. Feb 14 01:00:12.482017 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:00:12.493355 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:00:12.505060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:00:12.509064 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:00:12.606176 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 14 01:00:12.731126 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 14 01:00:12.731262 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 14 01:00:12.731997 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 14 01:00:12.740234 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 14 01:00:12.740277 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 14 01:00:12.743278 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 14 01:00:12.757029 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 01:00:12.764125 kernel: usbcore: registered new interface driver usbhid Feb 14 01:00:12.764210 kernel: usbhid: USB HID core driver Feb 14 01:00:12.771623 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 14 01:00:12.771664 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 14 01:00:13.499096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 01:00:13.500063 disk-uuid[564]: The operation has completed successfully. Feb 14 01:00:13.557124 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 14 01:00:13.557293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 14 01:00:13.571117 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 14 01:00:13.576659 sh[587]: Success Feb 14 01:00:13.593445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 14 01:00:13.652554 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 14 01:00:13.661117 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 14 01:00:13.665788 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
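verity-setup.service above assembles /dev/mapper/usr, the dm-verity device that authenticates the read-only /usr partition against a root hash pinned on the kernel command line. The sketch below shows only the hash-tree idea behind it; the 4 KiB block size and digest packing are illustrative assumptions, not dm-verity's exact on-disk format.

```python
# Toy hash tree in the spirit of dm-verity ("sha256-avx" above): hash each
# data block, pack digests into hash blocks, repeat until one root remains.
import hashlib

BLOCK = 4096  # assumed data/hash block size

def next_layer(digests: list[bytes]) -> list[bytes]:
    per_block = BLOCK // 32  # 32-byte sha256 digests per hash block
    return [hashlib.sha256(b"".join(digests[i:i + per_block])).digest()
            for i in range(0, len(digests), per_block)]

def root_hash(data: bytes) -> bytes:
    digests = [hashlib.sha256(data[i:i + BLOCK]).digest()
               for i in range(0, len(data), BLOCK)]
    while len(digests) > 1:
        digests = next_layer(digests)
    return digests[0]

# Any single flipped bit in the data changes the root hash, which is
# how the kernel detects tampering with /usr at read time.
print(root_hash(b"\x00" * BLOCK * 8).hex())
```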
Feb 14 01:00:13.695029 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 14 01:00:13.695091 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:00:13.695112 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 14 01:00:13.696653 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 14 01:00:13.699847 kernel: BTRFS info (device dm-0): using free space tree Feb 14 01:00:13.707734 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 14 01:00:13.709231 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 14 01:00:13.715171 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 14 01:00:13.717177 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 14 01:00:13.733371 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:00:13.737048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:00:13.737100 kernel: BTRFS info (device vda6): using free space tree Feb 14 01:00:13.742013 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 01:00:13.756515 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 14 01:00:13.760169 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:00:13.766499 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 14 01:00:13.775220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 14 01:00:13.899334 ignition[676]: Ignition 2.19.0 Feb 14 01:00:13.900635 ignition[676]: Stage: fetch-offline Feb 14 01:00:13.903217 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 01:00:13.900720 ignition[676]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:13.900739 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:13.900949 ignition[676]: parsed url from cmdline: "" Feb 14 01:00:13.900956 ignition[676]: no config URL provided Feb 14 01:00:13.900966 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 01:00:13.901020 ignition[676]: no config at "/usr/lib/ignition/user.ign" Feb 14 01:00:13.901029 ignition[676]: failed to fetch config: resource requires networking Feb 14 01:00:13.901297 ignition[676]: Ignition finished successfully Feb 14 01:00:13.927674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 01:00:13.932246 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 01:00:13.965743 systemd-networkd[777]: lo: Link UP Feb 14 01:00:13.966714 systemd-networkd[777]: lo: Gained carrier Feb 14 01:00:13.968856 systemd-networkd[777]: Enumeration completed Feb 14 01:00:13.969295 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 01:00:13.969407 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:00:13.969413 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
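The fetch-offline messages above trace Ignition's fixed search order: baked-in config directories, a config URL from the kernel command line, then the local user.ign, before deferring to the networked fetch stage. Below is a flattened sketch of that chain using the paths from the log; Ignition's real logic (including handling of the ignition.config.url kernel argument and per-platform sources) is more involved.

```python
# Simplified replay of the fetch-offline decision chain logged above.
import os

def fetch_offline() -> bytes | None:
    if not os.path.isdir("/usr/lib/ignition/base.d"):
        print('no configs at "/usr/lib/ignition/base.d"')
    if not os.path.isdir("/usr/lib/ignition/base.platform.d/openstack"):
        print('no config dir at "/usr/lib/ignition/base.platform.d/openstack"')
    print("no config URL provided")  # nothing parsed from the cmdline
    try:
        print('reading system config file "/usr/lib/ignition/user.ign"')
        with open("/usr/lib/ignition/user.ign", "rb") as f:
            return f.read()
    except FileNotFoundError:
        # On OpenStack the remaining config sources need the network up.
        print("failed to fetch config: resource requires networking")
        return None

fetch_offline()
```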
Feb 14 01:00:13.970578 systemd-networkd[777]: eth0: Link UP Feb 14 01:00:13.970584 systemd-networkd[777]: eth0: Gained carrier Feb 14 01:00:13.970595 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:00:13.970745 systemd[1]: Reached target network.target - Network. Feb 14 01:00:13.979194 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 14 01:00:13.990107 systemd-networkd[777]: eth0: DHCPv4 address 10.230.12.186/30, gateway 10.230.12.185 acquired from 10.230.12.185 Feb 14 01:00:14.000594 ignition[779]: Ignition 2.19.0 Feb 14 01:00:14.000612 ignition[779]: Stage: fetch Feb 14 01:00:14.000840 ignition[779]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:14.000860 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:14.001029 ignition[779]: parsed url from cmdline: "" Feb 14 01:00:14.001036 ignition[779]: no config URL provided Feb 14 01:00:14.001046 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 01:00:14.001062 ignition[779]: no config at "/usr/lib/ignition/user.ign" Feb 14 01:00:14.001250 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 14 01:00:14.001351 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 14 01:00:14.001380 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 14 01:00:14.054878 ignition[779]: GET result: OK Feb 14 01:00:14.055110 ignition[779]: parsing config with SHA512: 55c739a805e4e53d78abb82b9d7ab04c26d869478176ff7fac35afe936831d3aaca76676d0b7a7e875ce11b0a1ff419d9f83d4b2959c460a8d97ba0e32c121c8 Feb 14 01:00:14.060233 unknown[779]: fetched base config from "system" Feb 14 01:00:14.060250 unknown[779]: fetched base config from "system" Feb 14 01:00:14.061086 ignition[779]: fetch: fetch complete Feb 14 01:00:14.060260 unknown[779]: fetched user config from "openstack" Feb 14 01:00:14.061095 ignition[779]: fetch: fetch passed Feb 14 01:00:14.063571 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 14 01:00:14.061162 ignition[779]: Ignition finished successfully Feb 14 01:00:14.071276 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 14 01:00:14.095111 ignition[786]: Ignition 2.19.0 Feb 14 01:00:14.095134 ignition[786]: Stage: kargs Feb 14 01:00:14.095394 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:14.098046 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 14 01:00:14.095415 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:14.096577 ignition[786]: kargs: kargs passed Feb 14 01:00:14.096666 ignition[786]: Ignition finished successfully Feb 14 01:00:14.106204 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 14 01:00:14.126834 ignition[792]: Ignition 2.19.0 Feb 14 01:00:14.126850 ignition[792]: Stage: disks Feb 14 01:00:14.127135 ignition[792]: no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:14.127157 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:14.129521 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 14 01:00:14.128233 ignition[792]: disks: disks passed Feb 14 01:00:14.131208 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
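With eth0 holding a DHCPv4 lease, the fetch stage above retrieves user data over plain HTTP from the link-local metadata service and logs a SHA512 of the config it parsed. A minimal equivalent is sketched below (only meaningful on an OpenStack instance; Ignition's real client also watches for a config drive and retries with backoff):

```python
# Fetch OpenStack user data the way the "fetch" stage above does.
import hashlib
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"

with urllib.request.urlopen(URL, timeout=10) as resp:  # attempt #1
    body = resp.read()

print("GET result: OK")
print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
```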
Feb 14 01:00:14.128304 ignition[792]: Ignition finished successfully Feb 14 01:00:14.132044 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 14 01:00:14.133575 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 01:00:14.134928 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 01:00:14.136571 systemd[1]: Reached target basic.target - Basic System. Feb 14 01:00:14.147212 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 14 01:00:14.165495 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 14 01:00:14.169040 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 14 01:00:14.175115 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 14 01:00:14.295073 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 14 01:00:14.296173 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 14 01:00:14.297463 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 14 01:00:14.311134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 14 01:00:14.314189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 14 01:00:14.316530 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 14 01:00:14.318412 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 14 01:00:14.322434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 14 01:00:14.329744 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808) Feb 14 01:00:14.329777 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:00:14.322481 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 01:00:14.334355 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:00:14.334390 kernel: BTRFS info (device vda6): using free space tree Feb 14 01:00:14.340829 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 14 01:00:14.342365 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 01:00:14.343026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 14 01:00:14.355222 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 14 01:00:14.425738 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Feb 14 01:00:14.433131 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Feb 14 01:00:14.442449 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Feb 14 01:00:14.451706 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Feb 14 01:00:14.553168 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 14 01:00:14.560118 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 14 01:00:14.561795 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 14 01:00:14.577997 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:00:14.600713 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
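The fsck summary above is worth decoding: the freshly provisioned ROOT filesystem is essentially empty at this point.

```python
# Usage implied by "ROOT: clean, 14/1628000 files, 120691/1617920 blocks".
files_used, files_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920
print(f"inodes in use: {files_used / files_total:.4%}")    # ~0.0009%
print(f"blocks in use: {blocks_used / blocks_total:.2%}")  # ~7.46%
```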
Feb 14 01:00:14.608095 ignition[927]: INFO : Ignition 2.19.0 Feb 14 01:00:14.608095 ignition[927]: INFO : Stage: mount Feb 14 01:00:14.610861 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:14.610861 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:14.610861 ignition[927]: INFO : mount: mount passed Feb 14 01:00:14.610861 ignition[927]: INFO : Ignition finished successfully Feb 14 01:00:14.611947 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 14 01:00:14.691187 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 14 01:00:15.562298 systemd-networkd[777]: eth0: Gained IPv6LL Feb 14 01:00:17.069192 systemd-networkd[777]: eth0: Ignoring DHCPv6 address 2a02:1348:179:832e:24:19ff:fee6:cba/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:832e:24:19ff:fee6:cba/64 assigned by NDisc. Feb 14 01:00:17.069210 systemd-networkd[777]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 14 01:00:21.495388 coreos-metadata[810]: Feb 14 01:00:21.495 WARN failed to locate config-drive, using the metadata service API instead Feb 14 01:00:21.524641 coreos-metadata[810]: Feb 14 01:00:21.524 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 14 01:00:21.537053 coreos-metadata[810]: Feb 14 01:00:21.537 INFO Fetch successful Feb 14 01:00:21.537931 coreos-metadata[810]: Feb 14 01:00:21.537 INFO wrote hostname srv-jzpa0.gb1.brightbox.com to /sysroot/etc/hostname Feb 14 01:00:21.540038 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 14 01:00:21.540329 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 14 01:00:21.555135 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 14 01:00:21.579249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 14 01:00:21.593022 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Feb 14 01:00:21.596511 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 01:00:21.596555 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 01:00:21.598311 kernel: BTRFS info (device vda6): using free space tree Feb 14 01:00:21.604031 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 01:00:21.607123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
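The DHCPv6 warning above is systemd-networkd noticing that the /128 address offered over DHCPv6 is the very address eth0 already autoconfigured from the router-advertised /64, so configuring both would duplicate it (hence the IPv6Token= hint). The overlap is easy to confirm with the addresses from the log:

```python
# Confirm the DHCPv6 /128 falls inside the NDisc-assigned /64.
import ipaddress

dhcpv6 = ipaddress.ip_interface("2a02:1348:179:832e:24:19ff:fee6:cba/128")
ndisc = ipaddress.ip_interface("2a02:1348:179:832e:24:19ff:fee6:cba/64")

print(dhcpv6.ip == ndisc.ip)       # True: byte-identical address
print(dhcpv6.ip in ndisc.network)  # True: inside the advertised /64
```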
Feb 14 01:00:21.638443 ignition[960]: INFO : Ignition 2.19.0 Feb 14 01:00:21.638443 ignition[960]: INFO : Stage: files Feb 14 01:00:21.640520 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:21.640520 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:21.640520 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Feb 14 01:00:21.643382 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 14 01:00:21.643382 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 14 01:00:21.645521 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 14 01:00:21.646618 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 14 01:00:21.646618 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 14 01:00:21.646300 unknown[960]: wrote ssh authorized keys file for user: core Feb 14 01:00:21.649674 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 14 01:00:21.649674 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 14 01:00:21.853300 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 14 01:00:22.171346 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 14 01:00:22.171346 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 14 01:00:22.174221 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 14 01:00:22.727055 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 14 01:00:26.818543 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 14 01:00:26.820805 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 14 01:00:26.820805 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 01:00:26.820805 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 01:00:26.820805 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 14 01:00:26.830489 ignition[960]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 14 01:00:26.830489 ignition[960]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 14 01:00:26.830489 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 14 01:00:26.830489 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 14 01:00:26.830489 ignition[960]: INFO : files: files passed Feb 14 01:00:26.830489 ignition[960]: INFO : Ignition finished successfully Feb 14 01:00:26.823557 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 14 01:00:26.837335 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 14 01:00:26.839892 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 14 01:00:26.855093 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 14 01:00:26.855292 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 14 01:00:26.866502 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 01:00:26.866502 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 14 01:00:26.870566 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 01:00:26.873623 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 01:00:26.875074 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 14 01:00:26.882234 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 14 01:00:26.916422 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 14 01:00:26.916662 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 14 01:00:26.919360 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Feb 14 01:00:26.920200 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 14 01:00:26.921976 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 14 01:00:26.932250 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 14 01:00:26.951723 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 01:00:26.959186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 14 01:00:26.976033 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:00:26.977901 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 01:00:26.980000 systemd[1]: Stopped target timers.target - Timer Units. Feb 14 01:00:26.980810 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 14 01:00:26.981037 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 01:00:26.983302 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 14 01:00:26.984282 systemd[1]: Stopped target basic.target - Basic System. Feb 14 01:00:26.985882 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 14 01:00:26.987417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 01:00:26.989057 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 14 01:00:26.990753 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 14 01:00:26.992387 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 01:00:26.994078 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 14 01:00:26.995639 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 14 01:00:26.997370 systemd[1]: Stopped target swap.target - Swaps. Feb 14 01:00:26.998781 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 14 01:00:26.998991 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 14 01:00:27.000844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:00:27.001868 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 01:00:27.003403 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 14 01:00:27.005073 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 01:00:27.006162 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 14 01:00:27.006419 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 14 01:00:27.008241 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 14 01:00:27.008439 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 01:00:27.009363 systemd[1]: ignition-files.service: Deactivated successfully. Feb 14 01:00:27.009526 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 14 01:00:27.018362 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 14 01:00:27.021313 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 14 01:00:27.022572 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 14 01:00:27.024155 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 01:00:27.026260 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 14 01:00:27.027183 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 01:00:27.039416 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 14 01:00:27.039563 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 14 01:00:27.058123 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 14 01:00:27.063178 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 14 01:00:27.064071 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 14 01:00:27.069038 ignition[1012]: INFO : Ignition 2.19.0 Feb 14 01:00:27.069038 ignition[1012]: INFO : Stage: umount Feb 14 01:00:27.070690 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 01:00:27.070690 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 01:00:27.072552 ignition[1012]: INFO : umount: umount passed Feb 14 01:00:27.072552 ignition[1012]: INFO : Ignition finished successfully Feb 14 01:00:27.072481 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 14 01:00:27.072701 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 14 01:00:27.074866 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 14 01:00:27.075285 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 14 01:00:27.076567 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 14 01:00:27.076658 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 14 01:00:27.077949 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 14 01:00:27.078082 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 14 01:00:27.079372 systemd[1]: Stopped target network.target - Network. Feb 14 01:00:27.080731 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 14 01:00:27.080804 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 01:00:27.082303 systemd[1]: Stopped target paths.target - Path Units. Feb 14 01:00:27.083672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 14 01:00:27.086090 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 01:00:27.087832 systemd[1]: Stopped target slices.target - Slice Units. Feb 14 01:00:27.089497 systemd[1]: Stopped target sockets.target - Socket Units. Feb 14 01:00:27.091070 systemd[1]: iscsid.socket: Deactivated successfully. Feb 14 01:00:27.091141 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 01:00:27.092726 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 14 01:00:27.092796 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 01:00:27.094488 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 14 01:00:27.094563 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 14 01:00:27.096038 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 14 01:00:27.096107 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 14 01:00:27.097667 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 14 01:00:27.097758 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 14 01:00:27.099623 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 14 01:00:27.102585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 14 01:00:27.104238 systemd-networkd[777]: eth0: DHCPv6 lease lost Feb 14 01:00:27.106722 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 14 01:00:27.106893 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 14 01:00:27.111753 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 14 01:00:27.111869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:00:27.119167 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 14 01:00:27.120371 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 14 01:00:27.120456 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 01:00:27.130224 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 01:00:27.133225 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 14 01:00:27.134018 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 14 01:00:27.151430 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 14 01:00:27.151687 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 01:00:27.155220 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 14 01:00:27.155305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 14 01:00:27.157336 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 14 01:00:27.157402 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 01:00:27.158921 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 14 01:00:27.159027 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 14 01:00:27.161304 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 14 01:00:27.161370 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 14 01:00:27.162792 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 01:00:27.162875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:00:27.169192 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 14 01:00:27.170052 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 14 01:00:27.170125 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:00:27.171753 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 14 01:00:27.171821 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 14 01:00:27.175296 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 14 01:00:27.175409 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:00:27.178126 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 14 01:00:27.178202 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 01:00:27.179776 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 14 01:00:27.179848 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:00:27.183286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 14 01:00:27.183364 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 14 01:00:27.186006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 01:00:27.186110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:00:27.190800 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 14 01:00:27.190972 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 14 01:00:27.193915 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 14 01:00:27.194343 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 14 01:00:27.196670 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 14 01:00:27.206743 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 14 01:00:27.216634 systemd[1]: Switching root. Feb 14 01:00:27.247594 systemd-journald[200]: Journal stopped Feb 14 01:00:28.843030 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Feb 14 01:00:28.843183 kernel: SELinux: policy capability network_peer_controls=1 Feb 14 01:00:28.843228 kernel: SELinux: policy capability open_perms=1 Feb 14 01:00:28.843261 kernel: SELinux: policy capability extended_socket_class=1 Feb 14 01:00:28.843287 kernel: SELinux: policy capability always_check_network=0 Feb 14 01:00:28.843313 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 14 01:00:28.843341 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 14 01:00:28.843368 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 14 01:00:28.843400 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 14 01:00:28.843435 kernel: audit: type=1403 audit(1739494827.633:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 14 01:00:28.843472 systemd[1]: Successfully loaded SELinux policy in 51.092ms. Feb 14 01:00:28.843516 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.594ms. Feb 14 01:00:28.843539 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 01:00:28.843560 systemd[1]: Detected virtualization kvm. Feb 14 01:00:28.843606 systemd[1]: Detected architecture x86-64. Feb 14 01:00:28.843629 systemd[1]: Detected first boot. Feb 14 01:00:28.843649 systemd[1]: Hostname set to . Feb 14 01:00:28.843692 systemd[1]: Initializing machine ID from VM UUID. Feb 14 01:00:28.843715 zram_generator::config[1054]: No configuration found. Feb 14 01:00:28.843745 systemd[1]: Populated /etc with preset unit settings. Feb 14 01:00:28.843782 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 14 01:00:28.843803 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 14 01:00:28.843824 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 14 01:00:28.843846 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 14 01:00:28.843866 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 14 01:00:28.843899 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 14 01:00:28.843921 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 14 01:00:28.843942 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
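"Initializing machine ID from VM UUID" above: on KVM the guest can read the SMBIOS product UUID, and on first boot systemd seeds the machine ID from it instead of generating a random one. A rough peek at the same source is sketched below (the file is usually root-only, and the exact transformation systemd applies is not reproduced here):

```python
# Peek at the VM UUID systemd derives the machine ID from on KVM guests.
import pathlib

dmi = pathlib.Path("/sys/class/dmi/id/product_uuid")
print(dmi.read_text().strip().lower())
```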
Feb 14 01:00:28.843962 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 14 01:00:28.844005 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 14 01:00:28.844033 systemd[1]: Created slice user.slice - User and Session Slice. Feb 14 01:00:28.844068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 01:00:28.844088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 01:00:28.844121 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 14 01:00:28.844153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 14 01:00:28.844175 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 14 01:00:28.844194 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 01:00:28.844213 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 14 01:00:28.844245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 01:00:28.844265 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 14 01:00:28.844285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 14 01:00:28.844331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 14 01:00:28.844353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 14 01:00:28.844373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 01:00:28.846008 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 01:00:28.846037 systemd[1]: Reached target slices.target - Slice Units. Feb 14 01:00:28.846067 systemd[1]: Reached target swap.target - Swaps. Feb 14 01:00:28.846087 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 14 01:00:28.846123 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 14 01:00:28.846157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:00:28.846215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 01:00:28.846236 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 01:00:28.846269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 14 01:00:28.846289 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 14 01:00:28.846310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 14 01:00:28.847017 systemd[1]: Mounting media.mount - External Media Directory... Feb 14 01:00:28.847046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:28.847067 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 14 01:00:28.847090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 14 01:00:28.847128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 14 01:00:28.847151 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Feb 14 01:00:28.847175 systemd[1]: Reached target machines.target - Containers. Feb 14 01:00:28.847196 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 14 01:00:28.847231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:00:28.847253 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 01:00:28.847281 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 14 01:00:28.847302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:00:28.847322 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 01:00:28.847350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 01:00:28.847390 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 14 01:00:28.847418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:00:28.847440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 14 01:00:28.847480 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 14 01:00:28.847503 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 14 01:00:28.847524 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 14 01:00:28.847551 systemd[1]: Stopped systemd-fsck-usr.service. Feb 14 01:00:28.847587 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 01:00:28.847613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 01:00:28.847635 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 14 01:00:28.847656 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 14 01:00:28.847676 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 01:00:28.847714 systemd[1]: verity-setup.service: Deactivated successfully. Feb 14 01:00:28.847737 systemd[1]: Stopped verity-setup.service. Feb 14 01:00:28.847758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:28.847778 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 14 01:00:28.847798 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 14 01:00:28.847818 systemd[1]: Mounted media.mount - External Media Directory. Feb 14 01:00:28.847838 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 14 01:00:28.847875 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 14 01:00:28.847898 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 14 01:00:28.847932 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 14 01:00:28.847954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:00:28.847998 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 14 01:00:28.848022 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 14 01:00:28.848057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 01:00:28.848126 systemd-journald[1151]: Collecting audit messages is disabled. 
Feb 14 01:00:28.848186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:00:28.848224 kernel: loop: module loaded Feb 14 01:00:28.848258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:00:28.850035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:00:28.850057 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 01:00:28.850080 systemd-journald[1151]: Journal started Feb 14 01:00:28.850113 systemd-journald[1151]: Runtime Journal (/run/log/journal/0c8230c7a1af4d3d8ab69f511b49463b) is 4.7M, max 38.0M, 33.2M free. Feb 14 01:00:28.424040 systemd[1]: Queued start job for default target multi-user.target. Feb 14 01:00:28.444081 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 14 01:00:28.444743 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 14 01:00:28.855502 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 01:00:28.857076 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:00:28.857317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:00:28.858547 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 14 01:00:28.860111 kernel: fuse: init (API version 7.39) Feb 14 01:00:28.860858 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 14 01:00:28.862430 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 14 01:00:28.862721 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 14 01:00:28.880357 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 14 01:00:28.893052 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 14 01:00:28.900044 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 14 01:00:28.902119 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 14 01:00:28.902168 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 01:00:28.905424 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 14 01:00:28.916274 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 14 01:00:28.917007 kernel: ACPI: bus type drm_connector registered Feb 14 01:00:28.923222 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 14 01:00:28.924229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:00:28.930181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 14 01:00:28.939176 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 14 01:00:28.940121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 01:00:28.944106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 14 01:00:28.944941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 01:00:28.950210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 14 01:00:28.952785 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 14 01:00:28.959211 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 01:00:28.964803 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 01:00:28.967079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 01:00:28.968355 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 14 01:00:28.969371 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 14 01:00:28.971586 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 14 01:00:29.013395 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 14 01:00:29.020475 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 14 01:00:29.037016 kernel: loop0: detected capacity change from 0 to 8 Feb 14 01:00:29.036241 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 14 01:00:29.058261 systemd-journald[1151]: Time spent on flushing to /var/log/journal/0c8230c7a1af4d3d8ab69f511b49463b is 67.652ms for 1145 entries. Feb 14 01:00:29.058261 systemd-journald[1151]: System Journal (/var/log/journal/0c8230c7a1af4d3d8ab69f511b49463b) is 8.0M, max 584.8M, 576.8M free. Feb 14 01:00:29.156127 systemd-journald[1151]: Received client request to flush runtime journal. Feb 14 01:00:29.156199 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 14 01:00:29.156230 kernel: loop1: detected capacity change from 0 to 218376 Feb 14 01:00:29.106918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 14 01:00:29.111616 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 14 01:00:29.134957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:00:29.157135 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 14 01:00:29.157162 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 14 01:00:29.163079 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 14 01:00:29.178344 kernel: loop2: detected capacity change from 0 to 140768 Feb 14 01:00:29.189399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 01:00:29.213287 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 14 01:00:29.258545 kernel: loop3: detected capacity change from 0 to 142488 Feb 14 01:00:29.340028 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 14 01:00:29.352247 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 01:00:29.359781 kernel: loop4: detected capacity change from 0 to 8 Feb 14 01:00:29.377775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 01:00:29.392425 kernel: loop5: detected capacity change from 0 to 218376 Feb 14 01:00:29.393327 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 14 01:00:29.420463 kernel: loop6: detected capacity change from 0 to 140768 Feb 14 01:00:29.456070 kernel: loop7: detected capacity change from 0 to 142488 Feb 14 01:00:29.468302 udevadm[1214]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 14 01:00:29.470970 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Feb 14 01:00:29.471027 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Feb 14 01:00:29.486608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:00:29.489384 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 14 01:00:29.490876 (sd-merge)[1211]: Merged extensions into '/usr'. Feb 14 01:00:29.506310 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Feb 14 01:00:29.506431 systemd[1]: Reloading... Feb 14 01:00:29.662998 zram_generator::config[1242]: No configuration found. Feb 14 01:00:29.823447 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 14 01:00:29.922032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:00:29.995201 systemd[1]: Reloading finished in 487 ms. Feb 14 01:00:30.025859 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 14 01:00:30.031069 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 14 01:00:30.044232 systemd[1]: Starting ensure-sysext.service... Feb 14 01:00:30.054695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 01:00:30.079135 systemd[1]: Reloading requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)... Feb 14 01:00:30.079165 systemd[1]: Reloading... Feb 14 01:00:30.111919 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 14 01:00:30.112665 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 14 01:00:30.115742 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 14 01:00:30.116226 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Feb 14 01:00:30.116353 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Feb 14 01:00:30.145781 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 01:00:30.148236 systemd-tmpfiles[1299]: Skipping /boot Feb 14 01:00:30.153400 zram_generator::config[1322]: No configuration found. Feb 14 01:00:30.200984 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 01:00:30.201211 systemd-tmpfiles[1299]: Skipping /boot Feb 14 01:00:30.368103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:00:30.439810 systemd[1]: Reloading finished in 360 ms. Feb 14 01:00:30.460157 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 14 01:00:30.470021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 01:00:30.492199 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 01:00:30.498439 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
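A small aside on the journald flush report a little above (67.652 ms for 1145 entries): that works out to roughly 59 microseconds per entry moved from the runtime journal to /var/log/journal.

```python
# Per-entry cost implied by the flush report above.
entries, ms = 1145, 67.652
print(f"{ms / entries * 1000:.0f} us per entry")  # ~59 us
```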
Feb 14 01:00:30.503427 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 14 01:00:30.509244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 01:00:30.513123 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 01:00:30.517199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 14 01:00:30.528388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:30.528708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:00:30.538370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:00:30.543310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 01:00:30.548304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:00:30.549266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:00:30.549474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:30.553949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:30.554255 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:00:30.554486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:00:30.554634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:30.562101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:30.562470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 01:00:30.573316 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 01:00:30.575304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:00:30.575528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 14 01:00:30.579932 systemd[1]: Finished ensure-sysext.service. Feb 14 01:00:30.593323 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 14 01:00:30.628396 systemd-udevd[1389]: Using default interface naming scheme 'v255'. Feb 14 01:00:30.631423 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 14 01:00:30.634213 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 14 01:00:30.640525 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:00:30.641442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:00:30.644376 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Feb 14 01:00:30.646695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 01:00:30.646929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:00:30.656813 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 01:00:30.658138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 01:00:30.663893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:00:30.665274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:00:30.667898 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 01:00:30.668024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 01:00:30.668078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 14 01:00:30.674421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 01:00:30.688190 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 01:00:30.710481 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 14 01:00:30.724012 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 14 01:00:30.737625 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 14 01:00:30.752356 augenrules[1437]: No rules Feb 14 01:00:30.755169 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 01:00:30.772374 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 14 01:00:30.940979 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 14 01:00:30.942781 systemd[1]: Reached target time-set.target - System Time Set. Feb 14 01:00:30.947543 systemd-networkd[1417]: lo: Link UP Feb 14 01:00:30.947900 systemd-networkd[1417]: lo: Gained carrier Feb 14 01:00:30.956154 systemd-networkd[1417]: Enumeration completed Feb 14 01:00:30.956490 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 01:00:30.957281 systemd-resolved[1388]: Positive Trust Anchors: Feb 14 01:00:30.958054 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 01:00:30.958230 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 01:00:30.958947 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:00:30.958960 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
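For context on the systemd-resolved lines above: the positive trust anchor is the DNSSEC root key-signing key (key tag 20326), and the long negative list disables validation for private-address reverse zones and special-use names such as home.arpa and local. Once the service is up, the effective configuration can be inspected with the stock client:

    resolvectl status              # per-link DNS servers, search domains, DNSSEC state
    resolvectl query example.com   # exercise the resolver end to end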
Feb 14 01:00:30.965153 systemd-networkd[1417]: eth0: Link UP Feb 14 01:00:30.965292 systemd-networkd[1417]: eth0: Gained carrier Feb 14 01:00:30.965325 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:00:30.966227 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 14 01:00:30.974515 systemd-resolved[1388]: Using system hostname 'srv-jzpa0.gb1.brightbox.com'. Feb 14 01:00:30.979596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 01:00:30.980501 systemd[1]: Reached target network.target - Network. Feb 14 01:00:30.981095 systemd-networkd[1417]: eth0: DHCPv4 address 10.230.12.186/30, gateway 10.230.12.185 acquired from 10.230.12.185 Feb 14 01:00:30.981493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:00:30.984170 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Feb 14 01:00:31.009297 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 14 01:00:31.015495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1414) Feb 14 01:00:31.042508 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 01:00:31.093952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 14 01:00:31.104410 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 14 01:00:31.131050 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 14 01:00:31.136323 kernel: mousedev: PS/2 mouse device common for all mice Feb 14 01:00:31.142617 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 14 01:00:31.147022 kernel: ACPI: button: Power Button [PWRF] Feb 14 01:00:31.197020 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 14 01:00:31.207080 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 14 01:00:31.215959 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 14 01:00:31.216303 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 14 01:00:31.291411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:00:31.443433 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 14 01:00:31.451175 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 14 01:00:31.525575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:00:31.544042 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 01:00:31.577625 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 14 01:00:31.579546 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:00:31.580452 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 01:00:31.581572 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
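eth0 matched the catch-all zz-default.network, and networkd warns repeatedly that matching on an interface name can be unpredictable across boots. Note also that the lease is a /30: network 10.230.12.184, usable hosts .185 (the gateway) and .186 (this machine), broadcast .187. A sketch of pinning the match to the NIC's permanent MAC instead (file name and MAC value are assumptions; read the real address from `ip link`):

    # /etc/systemd/network/10-eth0.network (hypothetical)
    [Match]
    PermanentMACAddress=52:54:00:aa:bb:cc

    [Network]
    DHCP=ipv4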
Feb 14 01:00:31.582538 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 14 01:00:31.583732 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 14 01:00:31.584680 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 14 01:00:31.585541 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 14 01:00:31.586429 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 14 01:00:31.586501 systemd[1]: Reached target paths.target - Path Units. Feb 14 01:00:31.587217 systemd[1]: Reached target timers.target - Timer Units. Feb 14 01:00:31.589292 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 14 01:00:31.592117 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 14 01:00:31.598886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 14 01:00:31.601638 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 14 01:00:31.603269 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 14 01:00:31.604203 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 01:00:31.604916 systemd[1]: Reached target basic.target - Basic System. Feb 14 01:00:31.605686 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 14 01:00:31.605741 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 14 01:00:31.616260 systemd[1]: Starting containerd.service - containerd container runtime... Feb 14 01:00:31.622191 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 14 01:00:31.632033 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 01:00:31.636594 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 14 01:00:31.643131 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 14 01:00:31.647261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 14 01:00:31.649113 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 14 01:00:31.657174 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 14 01:00:31.672521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 14 01:00:31.685920 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 14 01:00:31.693144 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
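The lvm2 warnings ("Failed to connect to lvmetad") only mean the metadata-caching daemon is absent, so activation falls back to scanning block devices directly; this is harmless on a host with no LVM volumes. The same scan by hand:

    pvscan   # physical volumes found by scanning
    vgscan   # volume groups
    lvscan   # logical volumes and their activation state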
Feb 14 01:00:31.695258 jq[1480]: false Feb 14 01:00:31.710204 extend-filesystems[1481]: Found loop4 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found loop5 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found loop6 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found loop7 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda1 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda2 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda3 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found usr Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda4 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda6 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda7 Feb 14 01:00:31.710204 extend-filesystems[1481]: Found vda9 Feb 14 01:00:31.710204 extend-filesystems[1481]: Checking size of /dev/vda9 Feb 14 01:00:31.868332 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 14 01:00:31.868567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1442) Feb 14 01:00:31.707225 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 14 01:00:31.869856 extend-filesystems[1481]: Resized partition /dev/vda9 Feb 14 01:00:31.759803 dbus-daemon[1479]: [system] SELinux support is enabled Feb 14 01:00:31.709872 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 14 01:00:31.875645 extend-filesystems[1502]: resize2fs 1.47.1 (20-May-2024) Feb 14 01:00:31.778122 dbus-daemon[1479]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1417 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 14 01:00:31.711762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 14 01:00:31.796358 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 14 01:00:31.723431 systemd[1]: Starting update-engine.service - Update Engine... Feb 14 01:00:31.747401 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 14 01:00:31.887050 update_engine[1493]: I20250214 01:00:31.839697 1493 main.cc:92] Flatcar Update Engine starting Feb 14 01:00:31.887050 update_engine[1493]: I20250214 01:00:31.862193 1493 update_check_scheduler.cc:74] Next update check in 10m19s Feb 14 01:00:31.753062 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 14 01:00:31.762372 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 14 01:00:31.768806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 14 01:00:31.769112 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 14 01:00:31.897120 jq[1501]: true Feb 14 01:00:31.898593 tar[1504]: linux-amd64/LICENSE Feb 14 01:00:31.898593 tar[1504]: linux-amd64/helm Feb 14 01:00:31.776920 systemd[1]: motdgen.service: Deactivated successfully. Feb 14 01:00:31.778271 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 14 01:00:31.779889 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
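update_engine schedules its first update check shortly after startup ("Next update check in 10m19s" above), and locksmithd, started a few lines below, coordinates any resulting reboot according to its strategy. On Flatcar the state should be queryable with the stock client (the single-dash flag spelling is an assumption about this image's build):

    update_engine_client -status   # current operation, version, download progress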
Feb 14 01:00:31.780901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 14 01:00:31.803900 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 14 01:00:31.804008 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 14 01:00:31.814749 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 14 01:00:31.816400 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 14 01:00:31.816461 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 14 01:00:31.834227 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) Feb 14 01:00:31.834261 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 14 01:00:31.849289 systemd-logind[1490]: New seat seat0. Feb 14 01:00:31.853204 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 01:00:31.862990 systemd[1]: Started update-engine.service - Update Engine. Feb 14 01:00:31.873149 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 01:00:31.910247 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 01:00:31.961271 jq[1518]: true Feb 14 01:00:32.119168 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 14 01:00:32.125740 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 14 01:00:32.119811 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1510 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 14 01:00:32.142546 systemd[1]: Starting polkit.service - Authorization Manager... Feb 14 01:00:32.150182 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 14 01:00:32.180752 extend-filesystems[1502]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 14 01:00:32.180752 extend-filesystems[1502]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 14 01:00:32.180752 extend-filesystems[1502]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 14 01:00:32.190887 extend-filesystems[1481]: Resized filesystem in /dev/vda9 Feb 14 01:00:32.186341 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 14 01:00:32.182637 polkitd[1534]: Started polkitd version 121 Feb 14 01:00:32.186869 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 01:00:32.206441 polkitd[1534]: Loading rules from directory /etc/polkit-1/rules.d Feb 14 01:00:32.206564 polkitd[1534]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 14 01:00:32.215385 polkitd[1534]: Finished loading, compiling and executing 2 rules Feb 14 01:00:32.218003 systemd[1]: Started polkit.service - Authorization Manager. 
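The ext4 resize above grows the root filesystem from 1617920 to 15121403 blocks of 4 KiB, roughly 6.2 GiB to 57.7 GiB, done online while / stays mounted. The same flow by hand, assuming the same vda layout and that cloud-utils' growpart is available:

    growpart /dev/vda 9    # grow partition 9 into the free space
    resize2fs /dev/vda9    # online-grow the mounted ext4 filesystem
    # 1617920 * 4096 B ≈ 6.2 GiB   ->   15121403 * 4096 B ≈ 57.7 GiB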
Feb 14 01:00:32.217718 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 14 01:00:32.219108 polkitd[1534]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 14 01:00:32.234000 bash[1543]: Updated "/home/core/.ssh/authorized_keys" Feb 14 01:00:32.242058 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 01:00:32.252104 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 01:00:32.252392 systemd[1]: Starting sshkeys.service... Feb 14 01:00:32.257958 systemd-hostnamed[1510]: Hostname set to (static) Feb 14 01:00:32.288248 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 14 01:00:32.303523 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 14 01:00:32.394452 systemd-networkd[1417]: eth0: Gained IPv6LL Feb 14 01:00:32.401406 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Feb 14 01:00:32.410189 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 14 01:00:32.412641 systemd[1]: Reached target network-online.target - Network is Online. Feb 14 01:00:32.420423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:00:32.430778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 14 01:00:32.459766 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 14 01:00:32.500287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 14 01:00:32.571811 containerd[1514]: time="2025-02-14T01:00:32.571549960Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 01:00:32.672737 containerd[1514]: time="2025-02-14T01:00:32.671143272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:00:32.679517 containerd[1514]: time="2025-02-14T01:00:32.679455582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:00:32.679580 containerd[1514]: time="2025-02-14T01:00:32.679514897Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 01:00:32.679580 containerd[1514]: time="2025-02-14T01:00:32.679543061Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 01:00:32.679825 containerd[1514]: time="2025-02-14T01:00:32.679794003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 01:00:32.679911 containerd[1514]: time="2025-02-14T01:00:32.679835149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 14 01:00:32.681647 containerd[1514]: time="2025-02-14T01:00:32.681606333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:00:32.681704 containerd[1514]: time="2025-02-14T01:00:32.681646406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 14 01:00:32.684212 containerd[1514]: time="2025-02-14T01:00:32.684176978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:00:32.684278 containerd[1514]: time="2025-02-14T01:00:32.684211880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 14 01:00:32.684278 containerd[1514]: time="2025-02-14T01:00:32.684234545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:00:32.684278 containerd[1514]: time="2025-02-14T01:00:32.684250797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 01:00:32.684419 containerd[1514]: time="2025-02-14T01:00:32.684391809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:00:32.684843 containerd[1514]: time="2025-02-14T01:00:32.684813224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:00:32.690272 containerd[1514]: time="2025-02-14T01:00:32.684960466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:00:32.690272 containerd[1514]: time="2025-02-14T01:00:32.688963303Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 01:00:32.690272 containerd[1514]: time="2025-02-14T01:00:32.689164276Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 01:00:32.690272 containerd[1514]: time="2025-02-14T01:00:32.689281791Z" level=info msg="metadata content store policy set" policy=shared Feb 14 01:00:32.700315 containerd[1514]: time="2025-02-14T01:00:32.700273037Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 01:00:32.700407 containerd[1514]: time="2025-02-14T01:00:32.700364364Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 14 01:00:32.700407 containerd[1514]: time="2025-02-14T01:00:32.700395953Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 01:00:32.700487 containerd[1514]: time="2025-02-14T01:00:32.700423741Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 01:00:32.700487 containerd[1514]: time="2025-02-14T01:00:32.700456350Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 01:00:32.700725 containerd[1514]: time="2025-02-14T01:00:32.700692239Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703038803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703247404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703274587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703295356Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703324756Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703354914Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703381874Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703404479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703426796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703449374Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703469594Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703502005Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703539504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704183 containerd[1514]: time="2025-02-14T01:00:32.703569038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703589937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703611272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703631081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703651105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703670045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703692045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703725797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703752497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703773605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703793676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703815077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703837544Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703886633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703912239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.704669 containerd[1514]: time="2025-02-14T01:00:32.703931140Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 01:00:32.705193 containerd[1514]: time="2025-02-14T01:00:32.704905479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 01:00:32.705193 containerd[1514]: time="2025-02-14T01:00:32.704949390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 01:00:32.705193 containerd[1514]: time="2025-02-14T01:00:32.705000023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 14 01:00:32.705193 containerd[1514]: time="2025-02-14T01:00:32.705026413Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 01:00:32.705193 containerd[1514]: time="2025-02-14T01:00:32.705044462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 01:00:32.705193 containerd[1514]: time="2025-02-14T01:00:32.705080806Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 01:00:32.709495 containerd[1514]: time="2025-02-14T01:00:32.705108417Z" level=info msg="NRI interface is disabled by configuration." Feb 14 01:00:32.709495 containerd[1514]: time="2025-02-14T01:00:32.707042580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 14 01:00:32.709606 containerd[1514]: time="2025-02-14T01:00:32.707456285Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 01:00:32.709606 containerd[1514]: time="2025-02-14T01:00:32.707575805Z" level=info msg="Connect containerd service" Feb 14 01:00:32.709606 containerd[1514]: time="2025-02-14T01:00:32.707631385Z" level=info msg="using legacy CRI server" Feb 14 01:00:32.709606 containerd[1514]: time="2025-02-14T01:00:32.707648030Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 01:00:32.714366 containerd[1514]: time="2025-02-14T01:00:32.710007032Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 01:00:32.714366 containerd[1514]: time="2025-02-14T01:00:32.711556125Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 01:00:32.715415 
containerd[1514]: time="2025-02-14T01:00:32.715383397Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 01:00:32.716924 containerd[1514]: time="2025-02-14T01:00:32.716889555Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 01:00:32.717129 containerd[1514]: time="2025-02-14T01:00:32.716108840Z" level=info msg="Start subscribing containerd event" Feb 14 01:00:32.717257 containerd[1514]: time="2025-02-14T01:00:32.717230745Z" level=info msg="Start recovering state" Feb 14 01:00:32.717623 containerd[1514]: time="2025-02-14T01:00:32.717590648Z" level=info msg="Start event monitor" Feb 14 01:00:32.721562 containerd[1514]: time="2025-02-14T01:00:32.721524645Z" level=info msg="Start snapshots syncer" Feb 14 01:00:32.721562 containerd[1514]: time="2025-02-14T01:00:32.721560848Z" level=info msg="Start cni network conf syncer for default" Feb 14 01:00:32.721660 containerd[1514]: time="2025-02-14T01:00:32.721577232Z" level=info msg="Start streaming server" Feb 14 01:00:32.721660 containerd[1514]: time="2025-02-14T01:00:32.721684733Z" level=info msg="containerd successfully booted in 0.154304s" Feb 14 01:00:32.721801 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 01:00:33.001835 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 01:00:33.064183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 01:00:33.074519 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 01:00:33.082449 systemd[1]: Started sshd@0-10.230.12.186:22-218.92.0.237:56934.service - OpenSSH per-connection server daemon (218.92.0.237:56934). Feb 14 01:00:33.104178 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 01:00:33.104532 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 14 01:00:33.114025 tar[1504]: linux-amd64/README.md Feb 14 01:00:33.115334 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 01:00:33.139370 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 14 01:00:33.164728 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 01:00:33.177767 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 01:00:33.185558 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 14 01:00:33.187691 systemd[1]: Reached target getty.target - Login Prompts. Feb 14 01:00:33.645135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:00:33.655751 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:00:33.900867 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Feb 14 01:00:33.902475 systemd-networkd[1417]: eth0: Ignoring DHCPv6 address 2a02:1348:179:832e:24:19ff:fee6:cba/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:832e:24:19ff:fee6:cba/64 assigned by NDisc. Feb 14 01:00:33.902827 systemd-networkd[1417]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
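The containerd startup block above is worth unpacking: each snapshotter is probed and the unusable ones are skipped (no aufs module, /var/lib/containerd sits on ext4 so btrfs and zfs are out, devmapper is unconfigured), the "Start cri plugin with config {...}" dump shows the effective CRI settings (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=true, pause:3.8 sandbox image), and the "no network config found in /etc/cni/net.d" error is the normal pre-bootstrap state that persists until a CNI plugin installs a conflist there. The plugin outcome can be confirmed with the stock CLI, and a minimal config.toml reproducing those CRI settings would look like the following sketch (not the full shipped file):

    ctr plugins ls   # TYPE / ID / PLATFORMS / STATUS columns; skipped plugins show 'skip'

    # /etc/containerd/config.toml (minimal sketch matching the dump above)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true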
Feb 14 01:00:34.283122 kubelet[1608]: E0214 01:00:34.282892 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:00:34.286337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:00:34.286645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:00:34.287460 systemd[1]: kubelet.service: Consumed 1.059s CPU time. Feb 14 01:00:34.473244 sshd[1616]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:34.906724 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Feb 14 01:00:35.019048 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Feb 14 01:00:35.585545 systemd[1]: Started sshd@1-10.230.12.186:22-147.75.109.163:35542.service - OpenSSH per-connection server daemon (147.75.109.163:35542). Feb 14 01:00:36.306004 sshd[1590]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:36.481705 sshd[1621]: Accepted publickey for core from 147.75.109.163 port 35542 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:36.485030 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:36.501633 systemd-logind[1490]: New session 1 of user core. Feb 14 01:00:36.505058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 14 01:00:36.511441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 14 01:00:36.542694 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 14 01:00:36.554540 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 14 01:00:36.560136 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 14 01:00:36.676861 sshd[1623]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:36.698235 systemd[1626]: Queued start job for default target default.target. Feb 14 01:00:36.709819 systemd[1626]: Created slice app.slice - User Application Slice. Feb 14 01:00:36.709874 systemd[1626]: Reached target paths.target - Paths. Feb 14 01:00:36.709898 systemd[1626]: Reached target timers.target - Timers. Feb 14 01:00:36.712051 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 14 01:00:36.736377 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 14 01:00:36.736566 systemd[1626]: Reached target sockets.target - Sockets. Feb 14 01:00:36.736593 systemd[1626]: Reached target basic.target - Basic System. Feb 14 01:00:36.736669 systemd[1626]: Reached target default.target - Main User Target. Feb 14 01:00:36.736753 systemd[1626]: Startup finished in 167ms. Feb 14 01:00:36.736792 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 14 01:00:36.748509 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 14 01:00:37.401428 systemd[1]: Started sshd@2-10.230.12.186:22-147.75.109.163:35554.service - OpenSSH per-connection server daemon (147.75.109.163:35554). 
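The kubelet failure above repeats for the rest of this log and is the normal pre-join state: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, neither of which has run yet, so systemd restarts the unit roughly every ten seconds. For reference, the file it is looking for is a KubeletConfiguration; a minimal hand-written sketch (kubeadm generates the real one, values assumed):

    # /var/lib/kubelet/config.yaml (hypothetical)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches SystemdCgroup=true on the runc side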
Feb 14 01:00:38.246647 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 01:00:38.248136 login[1601]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 01:00:38.255955 systemd-logind[1490]: New session 3 of user core. Feb 14 01:00:38.265348 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 14 01:00:38.270355 systemd-logind[1490]: New session 2 of user core. Feb 14 01:00:38.276259 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 14 01:00:38.298543 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 35554 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:38.299243 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:38.310424 systemd-logind[1490]: New session 4 of user core. Feb 14 01:00:38.316890 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 14 01:00:38.750398 coreos-metadata[1478]: Feb 14 01:00:38.750 WARN failed to locate config-drive, using the metadata service API instead Feb 14 01:00:38.784120 coreos-metadata[1478]: Feb 14 01:00:38.784 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 14 01:00:38.798544 coreos-metadata[1478]: Feb 14 01:00:38.798 INFO Fetch failed with 404: resource not found Feb 14 01:00:38.798629 coreos-metadata[1478]: Feb 14 01:00:38.798 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 14 01:00:38.799400 coreos-metadata[1478]: Feb 14 01:00:38.799 INFO Fetch successful Feb 14 01:00:38.799584 coreos-metadata[1478]: Feb 14 01:00:38.799 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 14 01:00:38.816472 coreos-metadata[1478]: Feb 14 01:00:38.816 INFO Fetch successful Feb 14 01:00:38.816772 coreos-metadata[1478]: Feb 14 01:00:38.816 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 14 01:00:38.833661 coreos-metadata[1478]: Feb 14 01:00:38.833 INFO Fetch successful Feb 14 01:00:38.833882 coreos-metadata[1478]: Feb 14 01:00:38.833 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 14 01:00:38.845839 coreos-metadata[1478]: Feb 14 01:00:38.845 INFO Fetch successful Feb 14 01:00:38.846014 coreos-metadata[1478]: Feb 14 01:00:38.845 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 14 01:00:38.919485 sshd[1637]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:38.925703 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit. Feb 14 01:00:38.926733 systemd[1]: sshd@2-10.230.12.186:22-147.75.109.163:35554.service: Deactivated successfully. Feb 14 01:00:38.929352 systemd[1]: session-4.scope: Deactivated successfully. Feb 14 01:00:38.930906 systemd-logind[1490]: Removed session 4. Feb 14 01:00:39.078423 systemd[1]: Started sshd@3-10.230.12.186:22-147.75.109.163:59066.service - OpenSSH per-connection server daemon (147.75.109.163:59066). 
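coreos-metadata looks for a config-drive first, then falls back to the HTTP metadata service; the 404 on the OpenStack-style JSON path is non-fatal because the EC2-compatible paths answer. The same probes by hand:

    curl -sf http://169.254.169.254/openstack/2012-08-10/meta_data.json || echo 'no openstack endpoint'
    curl -s  http://169.254.169.254/latest/meta-data/hostname
    curl -s  http://169.254.169.254/latest/meta-data/public-ipv4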
Feb 14 01:00:39.116200 sshd[1590]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:39.438669 coreos-metadata[1561]: Feb 14 01:00:39.438 WARN failed to locate config-drive, using the metadata service API instead Feb 14 01:00:39.462818 coreos-metadata[1561]: Feb 14 01:00:39.462 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 14 01:00:39.486048 coreos-metadata[1561]: Feb 14 01:00:39.485 INFO Fetch successful Feb 14 01:00:39.486185 coreos-metadata[1561]: Feb 14 01:00:39.486 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 14 01:00:39.489075 sshd[1674]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:39.527090 coreos-metadata[1561]: Feb 14 01:00:39.527 INFO Fetch successful Feb 14 01:00:39.529046 unknown[1561]: wrote ssh authorized keys file for user: core Feb 14 01:00:39.561228 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" Feb 14 01:00:39.562220 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 14 01:00:39.565185 systemd[1]: Finished sshkeys.service. Feb 14 01:00:39.877369 coreos-metadata[1478]: Feb 14 01:00:39.877 INFO Fetch successful Feb 14 01:00:39.902570 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 14 01:00:39.903365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 14 01:00:39.903588 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 14 01:00:39.903782 systemd[1]: Startup finished in 1.444s (kernel) + 16.883s (initrd) + 12.319s (userspace) = 30.647s. Feb 14 01:00:39.978561 sshd[1672]: Accepted publickey for core from 147.75.109.163 port 59066 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:39.980573 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:39.987037 systemd-logind[1490]: New session 5 of user core. Feb 14 01:00:39.994216 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 14 01:00:40.600578 sshd[1672]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:40.606088 systemd[1]: sshd@3-10.230.12.186:22-147.75.109.163:59066.service: Deactivated successfully. Feb 14 01:00:40.608371 systemd[1]: session-5.scope: Deactivated successfully. Feb 14 01:00:40.609363 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. Feb 14 01:00:40.610945 systemd-logind[1490]: Removed session 5. Feb 14 01:00:41.672979 sshd[1590]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:41.859018 sshd[1590]: Received disconnect from 218.92.0.237 port 56934:11: [preauth] Feb 14 01:00:41.859018 sshd[1590]: Disconnected from authenticating user root 218.92.0.237 port 56934 [preauth] Feb 14 01:00:41.861634 systemd[1]: sshd@0-10.230.12.186:22-218.92.0.237:56934.service: Deactivated successfully. Feb 14 01:00:42.086613 systemd[1]: Started sshd@4-10.230.12.186:22-218.92.0.237:18936.service - OpenSSH per-connection server daemon (218.92.0.237:18936). Feb 14 01:00:43.717793 sshd[1695]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:44.537136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 14 01:00:44.551295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
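The "Startup finished" line is systemd's boot accounting; the printed total is presumably rounded from unrounded internals, which is why 1.444 + 16.883 + 12.319 shows as 30.647 rather than 30.646. The same numbers stay queryable after boot:

    systemd-analyze                                    # per-stage totals, as logged above
    systemd-analyze blame                              # slowest units first
    systemd-analyze critical-chain multi-user.target   # what gated the logged target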
Feb 14 01:00:44.715936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:00:44.729741 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:00:44.841999 kubelet[1704]: E0214 01:00:44.841727 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:00:44.845570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:00:44.845809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:00:46.453234 sshd[1693]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:46.890254 sshd[1711]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:48.702516 sshd[1693]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:49.140556 sshd[1712]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:50.753960 systemd[1]: Started sshd@5-10.230.12.186:22-147.75.109.163:44820.service - OpenSSH per-connection server daemon (147.75.109.163:44820). Feb 14 01:00:51.560270 sshd[1693]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:51.650577 sshd[1714]: Accepted publickey for core from 147.75.109.163 port 44820 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:51.653167 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:51.660464 systemd-logind[1490]: New session 6 of user core. Feb 14 01:00:51.669216 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 14 01:00:51.780826 sshd[1693]: Received disconnect from 218.92.0.237 port 18936:11: [preauth] Feb 14 01:00:51.781992 sshd[1693]: Disconnected from authenticating user root 218.92.0.237 port 18936 [preauth] Feb 14 01:00:51.784333 systemd[1]: sshd@4-10.230.12.186:22-218.92.0.237:18936.service: Deactivated successfully. Feb 14 01:00:52.018339 systemd[1]: Started sshd@6-10.230.12.186:22-218.92.0.237:20754.service - OpenSSH per-connection server daemon (218.92.0.237:20754). Feb 14 01:00:52.281580 sshd[1714]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:52.286496 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. Feb 14 01:00:52.287042 systemd[1]: sshd@5-10.230.12.186:22-147.75.109.163:44820.service: Deactivated successfully. Feb 14 01:00:52.289490 systemd[1]: session-6.scope: Deactivated successfully. Feb 14 01:00:52.291624 systemd-logind[1490]: Removed session 6. Feb 14 01:00:52.444340 systemd[1]: Started sshd@7-10.230.12.186:22-147.75.109.163:44822.service - OpenSSH per-connection server daemon (147.75.109.163:44822). Feb 14 01:00:53.359492 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 44822 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:53.362112 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:53.370062 systemd-logind[1490]: New session 7 of user core. Feb 14 01:00:53.380223 systemd[1]: Started session-7.scope - Session 7 of User core. 
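Threaded through the legitimate core sessions is a persistent root password brute-force from 218.92.0.237; every attempt ends in "Permission denied" because only the provisioned public key authenticates here. A sketch of stating that policy explicitly (OpenSSH has supported config.d includes since 8.2, but whether this image's sshd_config includes the directory is an assumption):

    # /etc/ssh/sshd_config.d/10-hardening.conf (hypothetical drop-in)
    PermitRootLogin no
    PasswordAuthentication no

    # apply:
    systemctl reload sshd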
Feb 14 01:00:53.635281 sshd[1729]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:53.994720 sshd[1726]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:53.999773 systemd[1]: sshd@7-10.230.12.186:22-147.75.109.163:44822.service: Deactivated successfully. Feb 14 01:00:54.001790 systemd[1]: session-7.scope: Deactivated successfully. Feb 14 01:00:54.002621 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Feb 14 01:00:54.004166 systemd-logind[1490]: Removed session 7. Feb 14 01:00:54.162434 systemd[1]: Started sshd@8-10.230.12.186:22-147.75.109.163:44828.service - OpenSSH per-connection server daemon (147.75.109.163:44828). Feb 14 01:00:54.924452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 14 01:00:54.936260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:00:55.086088 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 44828 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:55.088698 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:55.090199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:00:55.099066 systemd-logind[1490]: New session 8 of user core. Feb 14 01:00:55.100463 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:00:55.101328 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 14 01:00:55.158228 kubelet[1743]: E0214 01:00:55.158141 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:00:55.160951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:00:55.161235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:00:55.408019 sshd[1720]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:55.740141 sshd[1734]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:55.744152 systemd[1]: sshd@8-10.230.12.186:22-147.75.109.163:44828.service: Deactivated successfully. Feb 14 01:00:55.746787 systemd[1]: session-8.scope: Deactivated successfully. Feb 14 01:00:55.748908 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Feb 14 01:00:55.750691 systemd-logind[1490]: Removed session 8. Feb 14 01:00:55.845739 sshd[1754]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:55.911355 systemd[1]: Started sshd@9-10.230.12.186:22-147.75.109.163:44836.service - OpenSSH per-connection server daemon (147.75.109.163:44836). Feb 14 01:00:57.223796 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 44836 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:57.226119 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:57.233645 systemd-logind[1490]: New session 9 of user core. Feb 14 01:00:57.245271 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 14 01:00:57.753851 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 14 01:00:57.755090 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:00:57.767763 sudo[1761]: pam_unix(sudo:session): session closed for user root Feb 14 01:00:57.915492 sshd[1758]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:57.920027 systemd[1]: sshd@9-10.230.12.186:22-147.75.109.163:44836.service: Deactivated successfully. Feb 14 01:00:57.922648 systemd[1]: session-9.scope: Deactivated successfully. Feb 14 01:00:57.924700 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Feb 14 01:00:57.926437 systemd-logind[1490]: Removed session 9. Feb 14 01:00:58.080392 systemd[1]: Started sshd@10-10.230.12.186:22-147.75.109.163:44844.service - OpenSSH per-connection server daemon (147.75.109.163:44844). Feb 14 01:00:58.561481 sshd[1720]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:00:58.975813 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 44844 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:00:58.978104 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:00:58.985036 systemd-logind[1490]: New session 10 of user core. Feb 14 01:00:58.993212 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 14 01:00:58.999360 sshd[1768]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root Feb 14 01:00:59.464848 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 14 01:00:59.465715 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:00:59.471920 sudo[1771]: pam_unix(sudo:session): session closed for user root Feb 14 01:00:59.480104 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 14 01:00:59.480543 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:00:59.499338 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 14 01:00:59.509831 auditctl[1774]: No rules Feb 14 01:00:59.510406 systemd[1]: audit-rules.service: Deactivated successfully. Feb 14 01:00:59.510731 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 14 01:00:59.518482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 01:00:59.567837 augenrules[1792]: No rules Feb 14 01:00:59.568940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 01:00:59.570850 sudo[1770]: pam_unix(sudo:session): session closed for user root Feb 14 01:00:59.720009 sshd[1766]: pam_unix(sshd:session): session closed for user core Feb 14 01:00:59.726107 systemd[1]: sshd@10-10.230.12.186:22-147.75.109.163:44844.service: Deactivated successfully. Feb 14 01:00:59.728400 systemd[1]: session-10.scope: Deactivated successfully. Feb 14 01:00:59.729302 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Feb 14 01:00:59.731050 systemd-logind[1490]: Removed session 10. Feb 14 01:00:59.893385 systemd[1]: Started sshd@11-10.230.12.186:22-147.75.109.163:54520.service - OpenSSH per-connection server daemon (147.75.109.163:54520). 
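The sudo sequence above removes the default audit rule snippets and restarts audit-rules.service, after which auditctl and augenrules both report "No rules". The equivalent manual steps with the stock auditd tooling:

    auditctl -D         # delete every currently loaded audit rule
    augenrules --load   # recompile /etc/audit/rules.d/*.rules and load the result
    auditctl -l         # list active rules ('No rules' when the directory is empty)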
Feb 14 01:01:00.873243 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 54520 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:01:00.875544 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:01:00.884817 systemd-logind[1490]: New session 11 of user core. Feb 14 01:01:00.892333 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 14 01:01:01.376680 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 14 01:01:01.377272 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 01:01:01.458183 sshd[1720]: PAM: Permission denied for root from 218.92.0.237 Feb 14 01:01:01.676921 sshd[1720]: Received disconnect from 218.92.0.237 port 20754:11: [preauth] Feb 14 01:01:01.676921 sshd[1720]: Disconnected from authenticating user root 218.92.0.237 port 20754 [preauth] Feb 14 01:01:01.681087 systemd[1]: sshd@6-10.230.12.186:22-218.92.0.237:20754.service: Deactivated successfully. Feb 14 01:01:01.871391 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 14 01:01:01.871951 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 14 01:01:02.343153 dockerd[1822]: time="2025-02-14T01:01:02.343021311Z" level=info msg="Starting up" Feb 14 01:01:02.482607 systemd[1]: var-lib-docker-metacopy\x2dcheck2591650931-merged.mount: Deactivated successfully. Feb 14 01:01:02.504258 dockerd[1822]: time="2025-02-14T01:01:02.504185113Z" level=info msg="Loading containers: start." Feb 14 01:01:02.672505 kernel: Initializing XFRM netlink socket Feb 14 01:01:02.710319 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Feb 14 01:01:02.784301 systemd-networkd[1417]: docker0: Link UP Feb 14 01:01:02.805154 dockerd[1822]: time="2025-02-14T01:01:02.805099471Z" level=info msg="Loading containers: done." Feb 14 01:01:02.826182 dockerd[1822]: time="2025-02-14T01:01:02.826129581Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 14 01:01:02.826385 dockerd[1822]: time="2025-02-14T01:01:02.826314504Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 14 01:01:02.826530 dockerd[1822]: time="2025-02-14T01:01:02.826491682Z" level=info msg="Daemon has completed initialization" Feb 14 01:01:02.826772 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck973997920-merged.mount: Deactivated successfully. Feb 14 01:01:02.865669 dockerd[1822]: time="2025-02-14T01:01:02.865546611Z" level=info msg="API listen on /run/docker.sock" Feb 14 01:01:02.866248 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 14 01:01:03.701736 systemd-resolved[1388]: Clock change detected. Flushing caches. Feb 14 01:01:03.703149 systemd-timesyncd[1402]: Contacted time server [2a03:b0c0:1:d0::1f9:f001]:123 (2.flatcar.pool.ntp.org). Feb 14 01:01:03.703660 systemd-timesyncd[1402]: Initial clock synchronization to Fri 2025-02-14 01:01:03.701589 UTC. 
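Interleaved with the legitimate sessions, 218.92.0.237 runs a routine root brute-force: pam_unix authentication failures, PAM "Permission denied for root", then a preauth disconnect once sshd gives up. A fail2ban-style tally per source address (an illustrative sketch, not a real fail2ban setup):

import re
from collections import Counter

# Covers both failure shapes seen above: pam_unix auth failures and
# sshd's "Permission denied for root from <ip>" lines.
FAIL_RE = re.compile(
    r"(?:authentication failure;.*rhost=|Permission denied for root from )"
    r"(\d+\.\d+\.\d+\.\d+)"
)

def count_failures(lines):
    hits = Counter()
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

log = [
    "sshd[1729]: pam_unix(sshd:auth): authentication failure; logname= uid=0 "
    "euid=0 tty=ssh ruser= rhost=218.92.0.237 user=root",
    "sshd[1720]: PAM: Permission denied for root from 218.92.0.237",
]
print(count_failures(log))  # Counter({'218.92.0.237': 2})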
Feb 14 01:01:04.259805 containerd[1514]: time="2025-02-14T01:01:04.259695192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 14 01:01:04.410968 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 14 01:01:05.044304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353938775.mount: Deactivated successfully. Feb 14 01:01:05.764019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 14 01:01:05.771793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:01:05.925820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:05.941989 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:01:05.996301 kubelet[2028]: E0214 01:01:05.996183 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:01:05.999512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:01:05.999792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:01:08.759843 containerd[1514]: time="2025-02-14T01:01:08.759644105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:08.761322 containerd[1514]: time="2025-02-14T01:01:08.761276214Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673939" Feb 14 01:01:08.762164 containerd[1514]: time="2025-02-14T01:01:08.762086963Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:08.766175 containerd[1514]: time="2025-02-14T01:01:08.766110903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:08.768725 containerd[1514]: time="2025-02-14T01:01:08.767720751Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 4.507908453s" Feb 14 01:01:08.768725 containerd[1514]: time="2025-02-14T01:01:08.767794246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 14 01:01:08.769238 containerd[1514]: time="2025-02-14T01:01:08.769207312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 14 01:01:11.592501 containerd[1514]: time="2025-02-14T01:01:11.592207675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:11.594037 containerd[1514]: time="2025-02-14T01:01:11.593951336Z" level=info msg="stop pulling 
image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771792" Feb 14 01:01:11.594907 containerd[1514]: time="2025-02-14T01:01:11.594812469Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:11.598843 containerd[1514]: time="2025-02-14T01:01:11.598781401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:11.600941 containerd[1514]: time="2025-02-14T01:01:11.600503589Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 2.831096565s" Feb 14 01:01:11.600941 containerd[1514]: time="2025-02-14T01:01:11.600593663Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 14 01:01:11.601576 containerd[1514]: time="2025-02-14T01:01:11.601516844Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 14 01:01:14.148316 containerd[1514]: time="2025-02-14T01:01:14.148241334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:14.149811 containerd[1514]: time="2025-02-14T01:01:14.149765143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170284" Feb 14 01:01:14.150722 containerd[1514]: time="2025-02-14T01:01:14.150651070Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:14.154780 containerd[1514]: time="2025-02-14T01:01:14.154644462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:14.156854 containerd[1514]: time="2025-02-14T01:01:14.156575235Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 2.554981778s" Feb 14 01:01:14.156854 containerd[1514]: time="2025-02-14T01:01:14.156625404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 14 01:01:14.157599 containerd[1514]: time="2025-02-14T01:01:14.157485205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 14 01:01:16.013953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 14 01:01:16.024953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
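Every completed pull above ends with a size/duration pair, e.g. 28670731 bytes in 4.507908453s for kube-apiserver, which works out to roughly 6.1 MiB/s. A sketch that extracts the pair and computes the effective rate; the \" escaping in the pattern matches the journal's quoting of containerd's msg field:

import re

# Matches the tail of containerd pull messages, e.g.:
#   ... size \"28670731\" in 4.507908453s"
PULL_RE = re.compile(r'size \\"(\d+)\\" in ([\d.]+)(ms|s)')

def pull_rate_mib_s(line: str):
    m = PULL_RE.search(line)
    if not m:
        return None
    size, dur, unit = int(m.group(1)), float(m.group(2)), m.group(3)
    seconds = dur / 1000.0 if unit == "ms" else dur
    return size / seconds / (1024 * 1024)

sample = r'msg="Pulled image ... size \"28670731\" in 4.507908453s"'
print(f"{pull_rate_mib_s(sample):.1f} MiB/s")  # ~6.1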
Feb 14 01:01:16.070442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848306891.mount: Deactivated successfully. Feb 14 01:01:16.208233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:16.223055 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:01:16.290501 kubelet[2060]: E0214 01:01:16.290232 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:01:16.294110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:01:16.294588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:01:17.098625 containerd[1514]: time="2025-02-14T01:01:17.097837662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:17.100194 containerd[1514]: time="2025-02-14T01:01:17.099433070Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908847" Feb 14 01:01:17.101174 containerd[1514]: time="2025-02-14T01:01:17.101091867Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:17.104078 containerd[1514]: time="2025-02-14T01:01:17.104000661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:17.105610 containerd[1514]: time="2025-02-14T01:01:17.105193395Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.94734252s" Feb 14 01:01:17.105610 containerd[1514]: time="2025-02-14T01:01:17.105253463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 14 01:01:17.106467 containerd[1514]: time="2025-02-14T01:01:17.106428206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 14 01:01:17.728461 update_engine[1493]: I20250214 01:01:17.728168 1493 update_attempter.cc:509] Updating boot flags... Feb 14 01:01:17.836640 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2078) Feb 14 01:01:17.887593 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2077) Feb 14 01:01:17.888906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032661035.mount: Deactivated successfully. 
Feb 14 01:01:19.306387 containerd[1514]: time="2025-02-14T01:01:19.306178881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:19.307745 containerd[1514]: time="2025-02-14T01:01:19.307683291Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Feb 14 01:01:19.308637 containerd[1514]: time="2025-02-14T01:01:19.308589780Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:19.313140 containerd[1514]: time="2025-02-14T01:01:19.313101200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:19.315545 containerd[1514]: time="2025-02-14T01:01:19.315422827Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.208752769s" Feb 14 01:01:19.315545 containerd[1514]: time="2025-02-14T01:01:19.315483594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 14 01:01:19.316333 containerd[1514]: time="2025-02-14T01:01:19.316302466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 14 01:01:19.883564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3342093260.mount: Deactivated successfully. 
Feb 14 01:01:19.891126 containerd[1514]: time="2025-02-14T01:01:19.891070086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:19.892526 containerd[1514]: time="2025-02-14T01:01:19.892471252Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Feb 14 01:01:19.892873 containerd[1514]: time="2025-02-14T01:01:19.892809930Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:19.897607 containerd[1514]: time="2025-02-14T01:01:19.897516285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:19.898826 containerd[1514]: time="2025-02-14T01:01:19.898764917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.416774ms" Feb 14 01:01:19.899006 containerd[1514]: time="2025-02-14T01:01:19.898827068Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 14 01:01:19.899793 containerd[1514]: time="2025-02-14T01:01:19.899330982Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 14 01:01:20.587066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687944838.mount: Deactivated successfully. Feb 14 01:01:26.171296 containerd[1514]: time="2025-02-14T01:01:26.171103869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:26.172945 containerd[1514]: time="2025-02-14T01:01:26.172894785Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551328" Feb 14 01:01:26.173321 containerd[1514]: time="2025-02-14T01:01:26.173282587Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:26.178101 containerd[1514]: time="2025-02-14T01:01:26.178051920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:26.179999 containerd[1514]: time="2025-02-14T01:01:26.179958899Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.280581489s" Feb 14 01:01:26.180282 containerd[1514]: time="2025-02-14T01:01:26.180115457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 14 01:01:26.514104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
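kubelet.service is now on its fifth scheduled restart. The four "Scheduled restart job" stamps (01:00:54.92, 01:01:05.76, 01:01:16.01, 01:01:26.51) sit roughly 10 s apart, consistent with a plain on-failure restart policy; the unit's actual RestartSec is not shown in this log, and the small step from systemd-timesyncd's clock sync at 01:01:03 makes wall-clock deltas approximate. Checking the cadence:

from datetime import datetime

# Timestamps of the four "Scheduled restart job" entries above.
stamps = [
    "01:00:54.924452",
    "01:01:05.764019",
    "01:01:16.013953",
    "01:01:26.514104",
]

times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print([round(d, 1) for d in deltas])  # [10.8, 10.2, 10.5]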
Feb 14 01:01:26.525471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:01:26.802002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:26.808761 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:01:26.963902 kubelet[2211]: E0214 01:01:26.963810 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:01:26.967255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:01:26.967891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:01:29.635784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:29.651049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:01:29.690028 systemd[1]: Reloading requested from client PID 2237 ('systemctl') (unit session-11.scope)... Feb 14 01:01:29.690412 systemd[1]: Reloading... Feb 14 01:01:29.888600 zram_generator::config[2272]: No configuration found. Feb 14 01:01:30.063559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:01:30.171631 systemd[1]: Reloading finished in 480 ms. Feb 14 01:01:30.236391 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 14 01:01:30.236553 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 14 01:01:30.237067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:30.239705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:01:30.387660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:30.403988 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 01:01:30.499846 kubelet[2342]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:01:30.501366 kubelet[2342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 14 01:01:30.501366 kubelet[2342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
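The freshly started kubelet[2342] warns that --container-runtime-endpoint and --volume-plugin-dir belong in the file passed via --config. A sketch of the equivalent KubeletConfiguration stanza, emitted from Python; the field names are the v1beta1 ones, the endpoint is the stock containerd socket (an assumption, not read from this host), and the plugin dir is the Flexvolume path the kubelet itself reports below:

import yaml  # PyYAML

config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Assumed default containerd endpoint; check the unit's drop-ins for the real one.
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # Flexvolume directory the kubelet logs further down.
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(yaml.safe_dump(config, sort_keys=False))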
Feb 14 01:01:30.501366 kubelet[2342]: I0214 01:01:30.500679 2342 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 01:01:31.134357 kubelet[2342]: I0214 01:01:31.134259 2342 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 14 01:01:31.134357 kubelet[2342]: I0214 01:01:31.134304 2342 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 01:01:31.134776 kubelet[2342]: I0214 01:01:31.134742 2342 server.go:954] "Client rotation is on, will bootstrap in background" Feb 14 01:01:31.170043 kubelet[2342]: E0214 01:01:31.169970 2342 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.12.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:31.170618 kubelet[2342]: I0214 01:01:31.170578 2342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:01:31.190449 kubelet[2342]: E0214 01:01:31.190369 2342 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 01:01:31.190449 kubelet[2342]: I0214 01:01:31.190449 2342 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 01:01:31.198014 kubelet[2342]: I0214 01:01:31.197969 2342 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 01:01:31.202674 kubelet[2342]: I0214 01:01:31.202548 2342 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 01:01:31.202896 kubelet[2342]: I0214 01:01:31.202617 2342 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jzpa0.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 01:01:31.204666 kubelet[2342]: I0214 01:01:31.204577 2342 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 01:01:31.204666 kubelet[2342]: I0214 01:01:31.204615 2342 container_manager_linux.go:304] "Creating device plugin manager" Feb 14 01:01:31.204892 kubelet[2342]: I0214 01:01:31.204846 2342 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:01:31.209309 kubelet[2342]: I0214 01:01:31.209275 2342 kubelet.go:446] "Attempting to sync node with API server" Feb 14 01:01:31.209309 kubelet[2342]: I0214 01:01:31.209310 2342 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 01:01:31.209443 kubelet[2342]: I0214 01:01:31.209347 2342 kubelet.go:352] "Adding apiserver pod source" Feb 14 01:01:31.209443 kubelet[2342]: I0214 01:01:31.209369 2342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 01:01:31.218315 kubelet[2342]: I0214 01:01:31.218155 2342 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 01:01:31.221824 kubelet[2342]: I0214 01:01:31.221645 2342 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 01:01:31.223474 kubelet[2342]: W0214 01:01:31.222398 2342 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
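The container_manager_linux dump above is one JSON object, so the interesting knobs (systemd cgroup driver, cgroup v2, the hard eviction thresholds) can be read out mechanically. A sketch over an abridged copy of that payload:

import json

node_config = json.loads("""
{"NodeName":"srv-jzpa0.gb1.brightbox.com","CgroupsPerQOS":true,"CgroupRoot":"/",
 "CgroupDriver":"systemd","CgroupVersion":2,
 "HardEvictionThresholds":[
   {"Signal":"memory.available","Operator":"LessThan",
    "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
   {"Signal":"nodefs.available","Operator":"LessThan",
    "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    # A threshold carries either an absolute quantity or a percentage.
    limit = v["Quantity"] or f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {limit}')
# memory.available LessThan 100Mi
# nodefs.available LessThan 10%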
Feb 14 01:01:31.223615 kubelet[2342]: I0214 01:01:31.223594 2342 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 14 01:01:31.223686 kubelet[2342]: I0214 01:01:31.223650 2342 server.go:1287] "Started kubelet" Feb 14 01:01:31.224486 kubelet[2342]: W0214 01:01:31.223856 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:31.224486 kubelet[2342]: E0214 01:01:31.223936 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:31.226556 kubelet[2342]: W0214 01:01:31.225801 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.12.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jzpa0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:31.226556 kubelet[2342]: E0214 01:01:31.225874 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.12.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jzpa0.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:31.227192 kubelet[2342]: I0214 01:01:31.227137 2342 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 01:01:31.229992 kubelet[2342]: I0214 01:01:31.229831 2342 server.go:490] "Adding debug handlers to kubelet server" Feb 14 01:01:31.231406 kubelet[2342]: I0214 01:01:31.231156 2342 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 01:01:31.231619 kubelet[2342]: I0214 01:01:31.231584 2342 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 01:01:31.235587 kubelet[2342]: I0214 01:01:31.235556 2342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 01:01:31.236507 kubelet[2342]: E0214 01:01:31.233279 2342 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.12.186:6443/api/v1/namespaces/default/events\": dial tcp 10.230.12.186:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jzpa0.gb1.brightbox.com.1823ed6327fd9878 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jzpa0.gb1.brightbox.com,UID:srv-jzpa0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jzpa0.gb1.brightbox.com,},FirstTimestamp:2025-02-14 01:01:31.223619704 +0000 UTC m=+0.814915882,LastTimestamp:2025-02-14 01:01:31.223619704 +0000 UTC m=+0.814915882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jzpa0.gb1.brightbox.com,}" Feb 14 01:01:31.245273 kubelet[2342]: I0214 01:01:31.244756 2342 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 01:01:31.245641 kubelet[2342]: I0214 01:01:31.245613 2342 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 14 01:01:31.246177 kubelet[2342]: E0214 01:01:31.246135 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" Feb 14 01:01:31.251615 kubelet[2342]: E0214 01:01:31.251507 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jzpa0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.186:6443: connect: connection refused" interval="200ms" Feb 14 01:01:31.252823 kubelet[2342]: I0214 01:01:31.252592 2342 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 01:01:31.252823 kubelet[2342]: I0214 01:01:31.252748 2342 reconciler.go:26] "Reconciler: start to sync state" Feb 14 01:01:31.253226 kubelet[2342]: I0214 01:01:31.253192 2342 factory.go:221] Registration of the systemd container factory successfully Feb 14 01:01:31.253368 kubelet[2342]: I0214 01:01:31.253333 2342 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 01:01:31.259926 kubelet[2342]: I0214 01:01:31.259882 2342 factory.go:221] Registration of the containerd container factory successfully Feb 14 01:01:31.270253 kubelet[2342]: E0214 01:01:31.269985 2342 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 01:01:31.272620 kubelet[2342]: I0214 01:01:31.272520 2342 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 01:01:31.275896 kubelet[2342]: I0214 01:01:31.275818 2342 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 01:01:31.275896 kubelet[2342]: I0214 01:01:31.275867 2342 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 14 01:01:31.276094 kubelet[2342]: I0214 01:01:31.275913 2342 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 14 01:01:31.276094 kubelet[2342]: I0214 01:01:31.275932 2342 kubelet.go:2388] "Starting kubelet main sync loop" Feb 14 01:01:31.276094 kubelet[2342]: E0214 01:01:31.276038 2342 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 01:01:31.295144 kubelet[2342]: W0214 01:01:31.294978 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.12.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:31.295144 kubelet[2342]: E0214 01:01:31.295062 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.12.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:31.295422 kubelet[2342]: W0214 01:01:31.295199 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.12.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:31.295422 kubelet[2342]: E0214 01:01:31.295244 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.12.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:31.311625 kubelet[2342]: I0214 01:01:31.311588 2342 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 14 01:01:31.311847 kubelet[2342]: I0214 01:01:31.311825 2342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 14 01:01:31.312028 kubelet[2342]: I0214 01:01:31.312009 2342 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:01:31.314103 kubelet[2342]: I0214 01:01:31.314081 2342 policy_none.go:49] "None policy: Start" Feb 14 01:01:31.314258 kubelet[2342]: I0214 01:01:31.314239 2342 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 14 01:01:31.314400 kubelet[2342]: I0214 01:01:31.314366 2342 state_mem.go:35] "Initializing new in-memory state store" Feb 14 01:01:31.324740 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 14 01:01:31.345967 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 14 01:01:31.346974 kubelet[2342]: E0214 01:01:31.346615 2342 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" Feb 14 01:01:31.352121 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 14 01:01:31.364566 kubelet[2342]: I0214 01:01:31.363282 2342 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 01:01:31.364566 kubelet[2342]: I0214 01:01:31.363691 2342 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 01:01:31.364566 kubelet[2342]: I0214 01:01:31.363731 2342 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 01:01:31.364566 kubelet[2342]: I0214 01:01:31.364159 2342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 01:01:31.366361 kubelet[2342]: E0214 01:01:31.366337 2342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 14 01:01:31.366627 kubelet[2342]: E0214 01:01:31.366604 2342 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-jzpa0.gb1.brightbox.com\" not found" Feb 14 01:01:31.393253 systemd[1]: Created slice kubepods-burstable-pod2ea0cfc4319a032e9aa5351cc78da128.slice - libcontainer container kubepods-burstable-pod2ea0cfc4319a032e9aa5351cc78da128.slice. Feb 14 01:01:31.401855 kubelet[2342]: E0214 01:01:31.401807 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.406470 systemd[1]: Created slice kubepods-burstable-podc9a84ba2d0baf43292d036eccf950674.slice - libcontainer container kubepods-burstable-podc9a84ba2d0baf43292d036eccf950674.slice. Feb 14 01:01:31.409885 kubelet[2342]: E0214 01:01:31.409727 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.421745 systemd[1]: Created slice kubepods-burstable-pod02377bc67d10f58fbb1ca1fd1db3fa00.slice - libcontainer container kubepods-burstable-pod02377bc67d10f58fbb1ca1fd1db3fa00.slice. 
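The kubepods-burstable-pod<uid>.slice units above follow the systemd cgroup-driver layout declared in the config dump (CgroupDriver systemd, CgroupRoot /, CgroupVersion 2). A sketch composing where such a slice lands in cgroupfs; the directory layout is the conventional one for that driver, asserted here rather than read from this host, and real API pod UIDs would additionally have their dashes mapped to underscores:

def pod_slice_path(uid: str, qos: str = "burstable") -> str:
    # uid as it appears in the slice names above, e.g.
    # "2ea0cfc4319a032e9aa5351cc78da128" (static-pod hashes carry no dashes).
    return "/".join([
        "/sys/fs/cgroup",
        "kubepods.slice",
        f"kubepods-{qos}.slice",
        f"kubepods-{qos}-pod{uid}.slice",
    ])

print(pod_slice_path("2ea0cfc4319a032e9aa5351cc78da128"))
# /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea0cfc4319a032e9aa5351cc78da128.slice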
Feb 14 01:01:31.424841 kubelet[2342]: E0214 01:01:31.424812 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.453008 kubelet[2342]: E0214 01:01:31.452769 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jzpa0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.186:6443: connect: connection refused" interval="400ms" Feb 14 01:01:31.453121 kubelet[2342]: I0214 01:01:31.453027 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea0cfc4319a032e9aa5351cc78da128-ca-certs\") pod \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" (UID: \"2ea0cfc4319a032e9aa5351cc78da128\") " pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.453187 kubelet[2342]: I0214 01:01:31.453140 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea0cfc4319a032e9aa5351cc78da128-k8s-certs\") pod \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" (UID: \"2ea0cfc4319a032e9aa5351cc78da128\") " pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.467298 kubelet[2342]: I0214 01:01:31.467258 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.467964 kubelet[2342]: E0214 01:01:31.467878 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.12.186:6443/api/v1/nodes\": dial tcp 10.230.12.186:6443: connect: connection refused" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.553822 kubelet[2342]: I0214 01:01:31.553739 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.553822 kubelet[2342]: I0214 01:01:31.553806 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-ca-certs\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.554752 kubelet[2342]: I0214 01:01:31.553925 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea0cfc4319a032e9aa5351cc78da128-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" (UID: \"2ea0cfc4319a032e9aa5351cc78da128\") " pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.554752 kubelet[2342]: I0214 01:01:31.553984 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-flexvolume-dir\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: 
\"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.554752 kubelet[2342]: I0214 01:01:31.554011 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-k8s-certs\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.554752 kubelet[2342]: I0214 01:01:31.554038 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-kubeconfig\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.554752 kubelet[2342]: I0214 01:01:31.554064 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02377bc67d10f58fbb1ca1fd1db3fa00-kubeconfig\") pod \"kube-scheduler-srv-jzpa0.gb1.brightbox.com\" (UID: \"02377bc67d10f58fbb1ca1fd1db3fa00\") " pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.672525 kubelet[2342]: I0214 01:01:31.672340 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.673385 kubelet[2342]: E0214 01:01:31.673315 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.12.186:6443/api/v1/nodes\": dial tcp 10.230.12.186:6443: connect: connection refused" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:31.704590 containerd[1514]: time="2025-02-14T01:01:31.704481211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jzpa0.gb1.brightbox.com,Uid:2ea0cfc4319a032e9aa5351cc78da128,Namespace:kube-system,Attempt:0,}" Feb 14 01:01:31.715781 containerd[1514]: time="2025-02-14T01:01:31.715729865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jzpa0.gb1.brightbox.com,Uid:c9a84ba2d0baf43292d036eccf950674,Namespace:kube-system,Attempt:0,}" Feb 14 01:01:31.726429 containerd[1514]: time="2025-02-14T01:01:31.726305439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jzpa0.gb1.brightbox.com,Uid:02377bc67d10f58fbb1ca1fd1db3fa00,Namespace:kube-system,Attempt:0,}" Feb 14 01:01:31.854259 kubelet[2342]: E0214 01:01:31.854174 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jzpa0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.186:6443: connect: connection refused" interval="800ms" Feb 14 01:01:32.077269 kubelet[2342]: I0214 01:01:32.077196 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:32.077838 kubelet[2342]: E0214 01:01:32.077788 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.12.186:6443/api/v1/nodes\": dial tcp 10.230.12.186:6443: connect: connection refused" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:32.319623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970923249.mount: Deactivated successfully. 
Feb 14 01:01:32.325465 containerd[1514]: time="2025-02-14T01:01:32.325267703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:01:32.326877 containerd[1514]: time="2025-02-14T01:01:32.326766826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 14 01:01:32.329766 containerd[1514]: time="2025-02-14T01:01:32.329277484Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:01:32.330756 containerd[1514]: time="2025-02-14T01:01:32.330576535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 01:01:32.331255 containerd[1514]: time="2025-02-14T01:01:32.331173202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:01:32.332525 containerd[1514]: time="2025-02-14T01:01:32.332484369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 01:01:32.332843 containerd[1514]: time="2025-02-14T01:01:32.332671794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:01:32.336700 containerd[1514]: time="2025-02-14T01:01:32.336659103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:01:32.339652 containerd[1514]: time="2025-02-14T01:01:32.339607520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.979342ms" Feb 14 01:01:32.342477 containerd[1514]: time="2025-02-14T01:01:32.342289465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.486894ms" Feb 14 01:01:32.346633 containerd[1514]: time="2025-02-14T01:01:32.346592936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.952119ms" Feb 14 01:01:32.571355 containerd[1514]: time="2025-02-14T01:01:32.570897838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:01:32.571355 containerd[1514]: time="2025-02-14T01:01:32.570985193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:01:32.571355 containerd[1514]: time="2025-02-14T01:01:32.571028725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:32.571355 containerd[1514]: time="2025-02-14T01:01:32.571204167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:32.574379 containerd[1514]: time="2025-02-14T01:01:32.574106326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:01:32.574379 containerd[1514]: time="2025-02-14T01:01:32.574194580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:01:32.574379 containerd[1514]: time="2025-02-14T01:01:32.574217589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:32.575341 containerd[1514]: time="2025-02-14T01:01:32.575125324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:01:32.575645 containerd[1514]: time="2025-02-14T01:01:32.575313035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:01:32.575993 containerd[1514]: time="2025-02-14T01:01:32.574628339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:32.577819 containerd[1514]: time="2025-02-14T01:01:32.576932487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:32.578611 containerd[1514]: time="2025-02-14T01:01:32.577986298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:32.623732 systemd[1]: Started cri-containerd-e9832d603d29cf7b5e76075f19d191e741cf0b6bcb2d98773369b73502fe987c.scope - libcontainer container e9832d603d29cf7b5e76075f19d191e741cf0b6bcb2d98773369b73502fe987c. Feb 14 01:01:32.636520 systemd[1]: Started cri-containerd-cf7735d9f3fe54bd0a9633a852816a3482cfcb692f0d5cf30e6dc74a3cbd92a1.scope - libcontainer container cf7735d9f3fe54bd0a9633a852816a3482cfcb692f0d5cf30e6dc74a3cbd92a1. Feb 14 01:01:32.642938 systemd[1]: Started cri-containerd-e394b46ba5df750d01d53fb63a61095c6d0814dc635992f2071f2e71e8eb8b3d.scope - libcontainer container e394b46ba5df750d01d53fb63a61095c6d0814dc635992f2071f2e71e8eb8b3d. 
Feb 14 01:01:32.655893 kubelet[2342]: E0214 01:01:32.655341 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jzpa0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.186:6443: connect: connection refused" interval="1.6s" Feb 14 01:01:32.687979 kubelet[2342]: W0214 01:01:32.687877 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.12.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jzpa0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:32.688124 kubelet[2342]: E0214 01:01:32.687988 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.12.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jzpa0.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:32.693647 kubelet[2342]: W0214 01:01:32.693587 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:32.693753 kubelet[2342]: E0214 01:01:32.693646 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:32.732377 kubelet[2342]: W0214 01:01:32.732261 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.12.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:32.732377 kubelet[2342]: E0214 01:01:32.732325 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.12.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:32.750555 containerd[1514]: time="2025-02-14T01:01:32.750336612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jzpa0.gb1.brightbox.com,Uid:2ea0cfc4319a032e9aa5351cc78da128,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9832d603d29cf7b5e76075f19d191e741cf0b6bcb2d98773369b73502fe987c\"" Feb 14 01:01:32.759731 containerd[1514]: time="2025-02-14T01:01:32.759375999Z" level=info msg="CreateContainer within sandbox \"e9832d603d29cf7b5e76075f19d191e741cf0b6bcb2d98773369b73502fe987c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 14 01:01:32.771326 containerd[1514]: time="2025-02-14T01:01:32.771280719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jzpa0.gb1.brightbox.com,Uid:c9a84ba2d0baf43292d036eccf950674,Namespace:kube-system,Attempt:0,} returns sandbox id \"e394b46ba5df750d01d53fb63a61095c6d0814dc635992f2071f2e71e8eb8b3d\"" Feb 14 01:01:32.774672 containerd[1514]: 
time="2025-02-14T01:01:32.774330605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jzpa0.gb1.brightbox.com,Uid:02377bc67d10f58fbb1ca1fd1db3fa00,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf7735d9f3fe54bd0a9633a852816a3482cfcb692f0d5cf30e6dc74a3cbd92a1\"" Feb 14 01:01:32.777136 containerd[1514]: time="2025-02-14T01:01:32.776747301Z" level=info msg="CreateContainer within sandbox \"e394b46ba5df750d01d53fb63a61095c6d0814dc635992f2071f2e71e8eb8b3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 14 01:01:32.780514 containerd[1514]: time="2025-02-14T01:01:32.780458600Z" level=info msg="CreateContainer within sandbox \"cf7735d9f3fe54bd0a9633a852816a3482cfcb692f0d5cf30e6dc74a3cbd92a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 14 01:01:32.820935 containerd[1514]: time="2025-02-14T01:01:32.820831335Z" level=info msg="CreateContainer within sandbox \"e394b46ba5df750d01d53fb63a61095c6d0814dc635992f2071f2e71e8eb8b3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f1c1542ecc7384b604bc5fd9e077c7be7c90cee0ecd5481c0d7dfb0a83f485e4\"" Feb 14 01:01:32.822669 containerd[1514]: time="2025-02-14T01:01:32.822369493Z" level=info msg="CreateContainer within sandbox \"e9832d603d29cf7b5e76075f19d191e741cf0b6bcb2d98773369b73502fe987c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d9fa505259181902e2f50d8da1521b1a999abe63ffaeec6e151ec863904e971\"" Feb 14 01:01:32.822669 containerd[1514]: time="2025-02-14T01:01:32.822412537Z" level=info msg="StartContainer for \"f1c1542ecc7384b604bc5fd9e077c7be7c90cee0ecd5481c0d7dfb0a83f485e4\"" Feb 14 01:01:32.824252 containerd[1514]: time="2025-02-14T01:01:32.823661384Z" level=info msg="StartContainer for \"0d9fa505259181902e2f50d8da1521b1a999abe63ffaeec6e151ec863904e971\"" Feb 14 01:01:32.833606 containerd[1514]: time="2025-02-14T01:01:32.833520846Z" level=info msg="CreateContainer within sandbox \"cf7735d9f3fe54bd0a9633a852816a3482cfcb692f0d5cf30e6dc74a3cbd92a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f699c73d4b075af26e1f3c633d1898bec9c5d74e53b32b0e4ddc834a3ce33f2e\"" Feb 14 01:01:32.835400 containerd[1514]: time="2025-02-14T01:01:32.835357279Z" level=info msg="StartContainer for \"f699c73d4b075af26e1f3c633d1898bec9c5d74e53b32b0e4ddc834a3ce33f2e\"" Feb 14 01:01:32.866886 kubelet[2342]: W0214 01:01:32.866778 2342 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.12.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.12.186:6443: connect: connection refused Feb 14 01:01:32.867833 kubelet[2342]: E0214 01:01:32.867792 2342 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.12.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:32.882707 systemd[1]: Started cri-containerd-f1c1542ecc7384b604bc5fd9e077c7be7c90cee0ecd5481c0d7dfb0a83f485e4.scope - libcontainer container f1c1542ecc7384b604bc5fd9e077c7be7c90cee0ecd5481c0d7dfb0a83f485e4. 
Feb 14 01:01:32.900567 kubelet[2342]: I0214 01:01:32.899084 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:32.901722 systemd[1]: Started cri-containerd-0d9fa505259181902e2f50d8da1521b1a999abe63ffaeec6e151ec863904e971.scope - libcontainer container 0d9fa505259181902e2f50d8da1521b1a999abe63ffaeec6e151ec863904e971. Feb 14 01:01:32.902350 kubelet[2342]: E0214 01:01:32.902306 2342 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.12.186:6443/api/v1/nodes\": dial tcp 10.230.12.186:6443: connect: connection refused" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:32.922110 systemd[1]: Started cri-containerd-f699c73d4b075af26e1f3c633d1898bec9c5d74e53b32b0e4ddc834a3ce33f2e.scope - libcontainer container f699c73d4b075af26e1f3c633d1898bec9c5d74e53b32b0e4ddc834a3ce33f2e. Feb 14 01:01:33.016527 containerd[1514]: time="2025-02-14T01:01:33.016353096Z" level=info msg="StartContainer for \"f1c1542ecc7384b604bc5fd9e077c7be7c90cee0ecd5481c0d7dfb0a83f485e4\" returns successfully" Feb 14 01:01:33.016527 containerd[1514]: time="2025-02-14T01:01:33.016488497Z" level=info msg="StartContainer for \"0d9fa505259181902e2f50d8da1521b1a999abe63ffaeec6e151ec863904e971\" returns successfully" Feb 14 01:01:33.041206 containerd[1514]: time="2025-02-14T01:01:33.041062124Z" level=info msg="StartContainer for \"f699c73d4b075af26e1f3c633d1898bec9c5d74e53b32b0e4ddc834a3ce33f2e\" returns successfully" Feb 14 01:01:33.327917 kubelet[2342]: E0214 01:01:33.326707 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:33.332999 kubelet[2342]: E0214 01:01:33.332966 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:33.339450 kubelet[2342]: E0214 01:01:33.339233 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:33.355221 kubelet[2342]: E0214 01:01:33.355147 2342 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.12.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.186:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:01:34.342155 kubelet[2342]: E0214 01:01:34.341933 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:34.343702 kubelet[2342]: E0214 01:01:34.343389 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:34.508551 kubelet[2342]: I0214 01:01:34.506097 2342 kubelet_node_status.go:76] "Attempting to register node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:34.602543 kubelet[2342]: E0214 01:01:34.602406 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.284288 kubelet[2342]: E0214 01:01:36.284015 2342 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.429447 kubelet[2342]: E0214 01:01:36.429388 2342 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-jzpa0.gb1.brightbox.com\" not found" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.529565 kubelet[2342]: I0214 01:01:36.527499 2342 kubelet_node_status.go:79] "Successfully registered node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.549565 kubelet[2342]: I0214 01:01:36.547451 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.562927 kubelet[2342]: E0214 01:01:36.562881 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.562927 kubelet[2342]: I0214 01:01:36.562923 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.569643 kubelet[2342]: E0214 01:01:36.569592 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jzpa0.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.569767 kubelet[2342]: I0214 01:01:36.569658 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.572258 kubelet[2342]: E0214 01:01:36.572228 2342 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:36.576120 kubelet[2342]: E0214 01:01:36.575993 2342 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-jzpa0.gb1.brightbox.com.1823ed6327fd9878 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jzpa0.gb1.brightbox.com,UID:srv-jzpa0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jzpa0.gb1.brightbox.com,},FirstTimestamp:2025-02-14 01:01:31.223619704 +0000 UTC m=+0.814915882,LastTimestamp:2025-02-14 01:01:31.223619704 +0000 UTC m=+0.814915882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jzpa0.gb1.brightbox.com,}" Feb 14 01:01:36.634135 kubelet[2342]: E0214 01:01:36.633978 2342 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-jzpa0.gb1.brightbox.com.1823ed632ac0e29d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jzpa0.gb1.brightbox.com,UID:srv-jzpa0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-jzpa0.gb1.brightbox.com,},FirstTimestamp:2025-02-14 01:01:31.269972637 +0000 UTC m=+0.861268836,LastTimestamp:2025-02-14 01:01:31.269972637 +0000 UTC m=+0.861268836,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jzpa0.gb1.brightbox.com,}" Feb 14 01:01:37.215118 kubelet[2342]: I0214 01:01:37.215065 2342 apiserver.go:52] "Watching apiserver" Feb 14 01:01:37.253311 kubelet[2342]: I0214 01:01:37.253093 2342 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 01:01:37.432949 kubelet[2342]: I0214 01:01:37.432875 2342 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:37.441054 kubelet[2342]: W0214 01:01:37.441001 2342 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:01:38.471112 systemd[1]: Reloading requested from client PID 2618 ('systemctl') (unit session-11.scope)... Feb 14 01:01:38.471153 systemd[1]: Reloading... Feb 14 01:01:38.591592 zram_generator::config[2660]: No configuration found. Feb 14 01:01:38.783181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:01:38.911346 systemd[1]: Reloading finished in 439 ms. Feb 14 01:01:38.980636 kubelet[2342]: I0214 01:01:38.980590 2342 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:01:38.981412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:01:38.997427 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 01:01:38.997997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:38.998203 systemd[1]: kubelet.service: Consumed 1.372s CPU time, 122.4M memory peak, 0B memory swap peak. Feb 14 01:01:39.005960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:01:39.234102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:01:39.248212 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 01:01:39.376510 kubelet[2722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:01:39.376510 kubelet[2722]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 14 01:01:39.376510 kubelet[2722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
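
Most kubelet lines that follow use the klog prefix: a severity letter (I/W/E/F), an MMDD date, wall-clock time, the PID, the source file and line, then the message. Below is a self-contained sketch for splitting a journal dump like this one into those fields; the regular expression is our assumption about the format, not an official parser.

// klogparse.go - rough sketch of splitting klog-style kubelet lines
// into severity, date, time, PID, source location, and message.
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./_-]+:\d+)\] (.*)$`)

func main() {
	line := `I0214 01:01:39.394834 2722 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
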
Feb 14 01:01:39.376510 kubelet[2722]: I0214 01:01:39.374488 2722 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 01:01:39.395321 kubelet[2722]: I0214 01:01:39.394834 2722 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 14 01:01:39.395516 kubelet[2722]: I0214 01:01:39.395487 2722 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 01:01:39.395974 kubelet[2722]: I0214 01:01:39.395951 2722 server.go:954] "Client rotation is on, will bootstrap in background" Feb 14 01:01:39.403662 kubelet[2722]: I0214 01:01:39.403267 2722 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 14 01:01:39.415465 kubelet[2722]: I0214 01:01:39.415417 2722 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:01:39.426849 kubelet[2722]: E0214 01:01:39.426809 2722 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 01:01:39.427037 kubelet[2722]: I0214 01:01:39.427011 2722 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 01:01:39.436733 kubelet[2722]: I0214 01:01:39.435288 2722 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 14 01:01:39.439920 kubelet[2722]: I0214 01:01:39.438852 2722 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 01:01:39.439920 kubelet[2722]: I0214 01:01:39.439021 2722 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jzpa0.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 01:01:39.439920 kubelet[2722]: I0214 01:01:39.439270 2722 
topology_manager.go:138] "Creating topology manager with none policy" Feb 14 01:01:39.439920 kubelet[2722]: I0214 01:01:39.439286 2722 container_manager_linux.go:304] "Creating device plugin manager" Feb 14 01:01:39.442628 kubelet[2722]: I0214 01:01:39.442592 2722 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:01:39.442916 kubelet[2722]: I0214 01:01:39.442888 2722 kubelet.go:446] "Attempting to sync node with API server" Feb 14 01:01:39.442994 kubelet[2722]: I0214 01:01:39.442922 2722 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 01:01:39.442994 kubelet[2722]: I0214 01:01:39.442954 2722 kubelet.go:352] "Adding apiserver pod source" Feb 14 01:01:39.442994 kubelet[2722]: I0214 01:01:39.442977 2722 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 01:01:39.452470 kubelet[2722]: I0214 01:01:39.451729 2722 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 01:01:39.458485 kubelet[2722]: I0214 01:01:39.457779 2722 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 01:01:39.471565 kubelet[2722]: I0214 01:01:39.468878 2722 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 14 01:01:39.471565 kubelet[2722]: I0214 01:01:39.468938 2722 server.go:1287] "Started kubelet" Feb 14 01:01:39.494552 kubelet[2722]: I0214 01:01:39.494381 2722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 01:01:39.515186 kubelet[2722]: I0214 01:01:39.512779 2722 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 01:01:39.515186 kubelet[2722]: I0214 01:01:39.513809 2722 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 01:01:39.515186 kubelet[2722]: I0214 01:01:39.514661 2722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 01:01:39.516208 kubelet[2722]: I0214 01:01:39.515513 2722 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 01:01:39.519578 kubelet[2722]: I0214 01:01:39.517490 2722 server.go:490] "Adding debug handlers to kubelet server" Feb 14 01:01:39.524183 kubelet[2722]: E0214 01:01:39.520988 2722 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-jzpa0.gb1.brightbox.com\" not found" Feb 14 01:01:39.525596 kubelet[2722]: I0214 01:01:39.522639 2722 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 14 01:01:39.531377 kubelet[2722]: I0214 01:01:39.530467 2722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 01:01:39.541484 kubelet[2722]: E0214 01:01:39.540927 2722 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 01:01:39.542316 kubelet[2722]: I0214 01:01:39.541913 2722 factory.go:221] Registration of the containerd container factory successfully Feb 14 01:01:39.542316 kubelet[2722]: I0214 01:01:39.541942 2722 factory.go:221] Registration of the systemd container factory successfully Feb 14 01:01:39.543826 kubelet[2722]: I0214 01:01:39.522665 2722 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 01:01:39.544585 kubelet[2722]: I0214 01:01:39.544015 2722 reconciler.go:26] "Reconciler: start to sync state" Feb 14 01:01:39.574888 kubelet[2722]: I0214 01:01:39.574665 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 01:01:39.579392 kubelet[2722]: I0214 01:01:39.578478 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 01:01:39.579392 kubelet[2722]: I0214 01:01:39.578520 2722 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 14 01:01:39.579392 kubelet[2722]: I0214 01:01:39.578562 2722 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 14 01:01:39.579392 kubelet[2722]: I0214 01:01:39.578575 2722 kubelet.go:2388] "Starting kubelet main sync loop" Feb 14 01:01:39.579392 kubelet[2722]: E0214 01:01:39.578639 2722 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 01:01:39.679633 kubelet[2722]: E0214 01:01:39.678817 2722 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695438 2722 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695466 2722 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695495 2722 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695741 2722 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695761 2722 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695801 2722 policy_none.go:49] "None policy: Start" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695815 2722 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695834 2722 state_mem.go:35] "Initializing new in-memory state store" Feb 14 01:01:39.696839 kubelet[2722]: I0214 01:01:39.695994 2722 state_mem.go:75] "Updated machine memory state" Feb 14 01:01:39.703293 kubelet[2722]: I0214 01:01:39.702956 2722 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 01:01:39.704462 kubelet[2722]: I0214 01:01:39.703767 2722 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 01:01:39.707270 kubelet[2722]: I0214 01:01:39.707219 2722 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 01:01:39.708761 kubelet[2722]: I0214 01:01:39.707904 2722 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 01:01:39.726268 kubelet[2722]: E0214 01:01:39.726230 2722 eviction_manager.go:267] 
"eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 14 01:01:39.837917 kubelet[2722]: I0214 01:01:39.837003 2722 kubelet_node_status.go:76] "Attempting to register node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.850300 kubelet[2722]: I0214 01:01:39.848893 2722 kubelet_node_status.go:125] "Node was previously registered" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.850300 kubelet[2722]: I0214 01:01:39.848986 2722 kubelet_node_status.go:79] "Successfully registered node" node="srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.879874 kubelet[2722]: I0214 01:01:39.879851 2722 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.883984 kubelet[2722]: I0214 01:01:39.883901 2722 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.885019 kubelet[2722]: I0214 01:01:39.884231 2722 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.892112 kubelet[2722]: W0214 01:01:39.891710 2722 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:01:39.893897 kubelet[2722]: W0214 01:01:39.893876 2722 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:01:39.895741 kubelet[2722]: W0214 01:01:39.895618 2722 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:01:39.897360 kubelet[2722]: E0214 01:01:39.897297 2722 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jzpa0.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.946861 kubelet[2722]: I0214 01:01:39.946638 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea0cfc4319a032e9aa5351cc78da128-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" (UID: \"2ea0cfc4319a032e9aa5351cc78da128\") " pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.946861 kubelet[2722]: I0214 01:01:39.946722 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-k8s-certs\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.946861 kubelet[2722]: I0214 01:01:39.946762 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02377bc67d10f58fbb1ca1fd1db3fa00-kubeconfig\") pod \"kube-scheduler-srv-jzpa0.gb1.brightbox.com\" (UID: \"02377bc67d10f58fbb1ca1fd1db3fa00\") " pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.946861 kubelet[2722]: I0214 01:01:39.946797 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea0cfc4319a032e9aa5351cc78da128-ca-certs\") pod \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" (UID: \"2ea0cfc4319a032e9aa5351cc78da128\") " pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.946861 kubelet[2722]: I0214 01:01:39.946832 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-flexvolume-dir\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.947338 kubelet[2722]: I0214 01:01:39.946861 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-kubeconfig\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.947338 kubelet[2722]: I0214 01:01:39.946889 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.947338 kubelet[2722]: I0214 01:01:39.946925 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea0cfc4319a032e9aa5351cc78da128-k8s-certs\") pod \"kube-apiserver-srv-jzpa0.gb1.brightbox.com\" (UID: \"2ea0cfc4319a032e9aa5351cc78da128\") " pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:39.947338 kubelet[2722]: I0214 01:01:39.946955 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9a84ba2d0baf43292d036eccf950674-ca-certs\") pod \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" (UID: \"c9a84ba2d0baf43292d036eccf950674\") " pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:40.447308 kubelet[2722]: I0214 01:01:40.447192 2722 apiserver.go:52] "Watching apiserver" Feb 14 01:01:40.545566 kubelet[2722]: I0214 01:01:40.544674 2722 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 01:01:40.641223 kubelet[2722]: I0214 01:01:40.641158 2722 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:40.685736 kubelet[2722]: W0214 01:01:40.685660 2722 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 01:01:40.685997 kubelet[2722]: E0214 01:01:40.685800 2722 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jzpa0.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" Feb 14 01:01:40.791339 kubelet[2722]: I0214 01:01:40.791207 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-srv-jzpa0.gb1.brightbox.com" podStartSLOduration=1.791165739 podStartE2EDuration="1.791165739s" podCreationTimestamp="2025-02-14 01:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:01:40.761271676 +0000 UTC m=+1.475995026" watchObservedRunningTime="2025-02-14 01:01:40.791165739 +0000 UTC m=+1.505889083" Feb 14 01:01:40.825470 kubelet[2722]: I0214 01:01:40.825371 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-jzpa0.gb1.brightbox.com" podStartSLOduration=1.825347294 podStartE2EDuration="1.825347294s" podCreationTimestamp="2025-02-14 01:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:01:40.825005655 +0000 UTC m=+1.539729019" watchObservedRunningTime="2025-02-14 01:01:40.825347294 +0000 UTC m=+1.540070641" Feb 14 01:01:40.826351 kubelet[2722]: I0214 01:01:40.825602 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-jzpa0.gb1.brightbox.com" podStartSLOduration=3.8255921219999998 podStartE2EDuration="3.825592122s" podCreationTimestamp="2025-02-14 01:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:01:40.79478983 +0000 UTC m=+1.509513184" watchObservedRunningTime="2025-02-14 01:01:40.825592122 +0000 UTC m=+1.540315472" Feb 14 01:01:44.025873 kubelet[2722]: I0214 01:01:44.025804 2722 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 14 01:01:44.026738 containerd[1514]: time="2025-02-14T01:01:44.026496581Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 14 01:01:44.027234 kubelet[2722]: I0214 01:01:44.026767 2722 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 14 01:01:44.912963 systemd[1]: Created slice kubepods-besteffort-podb84bfe8d_cb1e_4836_81b7_cf1cbbd5411c.slice - libcontainer container kubepods-besteffort-podb84bfe8d_cb1e_4836_81b7_cf1cbbd5411c.slice. 
Feb 14 01:01:44.982679 kubelet[2722]: I0214 01:01:44.982423 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c-lib-modules\") pod \"kube-proxy-wt42b\" (UID: \"b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c\") " pod="kube-system/kube-proxy-wt42b" Feb 14 01:01:44.982679 kubelet[2722]: I0214 01:01:44.982483 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c-kube-proxy\") pod \"kube-proxy-wt42b\" (UID: \"b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c\") " pod="kube-system/kube-proxy-wt42b" Feb 14 01:01:44.982679 kubelet[2722]: I0214 01:01:44.982516 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c-xtables-lock\") pod \"kube-proxy-wt42b\" (UID: \"b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c\") " pod="kube-system/kube-proxy-wt42b" Feb 14 01:01:44.982679 kubelet[2722]: I0214 01:01:44.982567 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57q5\" (UniqueName: \"kubernetes.io/projected/b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c-kube-api-access-b57q5\") pod \"kube-proxy-wt42b\" (UID: \"b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c\") " pod="kube-system/kube-proxy-wt42b" Feb 14 01:01:45.083656 kubelet[2722]: I0214 01:01:45.082860 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtsc\" (UniqueName: \"kubernetes.io/projected/29ae69a8-e3f7-4286-8754-194c96256cf1-kube-api-access-kgtsc\") pod \"tigera-operator-7d68577dc5-kfp5g\" (UID: \"29ae69a8-e3f7-4286-8754-194c96256cf1\") " pod="tigera-operator/tigera-operator-7d68577dc5-kfp5g" Feb 14 01:01:45.083656 kubelet[2722]: I0214 01:01:45.082984 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29ae69a8-e3f7-4286-8754-194c96256cf1-var-lib-calico\") pod \"tigera-operator-7d68577dc5-kfp5g\" (UID: \"29ae69a8-e3f7-4286-8754-194c96256cf1\") " pod="tigera-operator/tigera-operator-7d68577dc5-kfp5g" Feb 14 01:01:45.087933 systemd[1]: Created slice kubepods-besteffort-pod29ae69a8_e3f7_4286_8754_194c96256cf1.slice - libcontainer container kubepods-besteffort-pod29ae69a8_e3f7_4286_8754_194c96256cf1.slice. Feb 14 01:01:45.223201 containerd[1514]: time="2025-02-14T01:01:45.222594912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wt42b,Uid:b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c,Namespace:kube-system,Attempt:0,}" Feb 14 01:01:45.277808 containerd[1514]: time="2025-02-14T01:01:45.277392284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:01:45.277808 containerd[1514]: time="2025-02-14T01:01:45.277527987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:01:45.277808 containerd[1514]: time="2025-02-14T01:01:45.277578176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:45.278483 containerd[1514]: time="2025-02-14T01:01:45.278341068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:45.311851 systemd[1]: Started cri-containerd-8f3fbb22f3c988c5f3e7d2cea906d5aede256d193b4d65febd83a1858f38873c.scope - libcontainer container 8f3fbb22f3c988c5f3e7d2cea906d5aede256d193b4d65febd83a1858f38873c. Feb 14 01:01:45.346994 containerd[1514]: time="2025-02-14T01:01:45.346942276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wt42b,Uid:b84bfe8d-cb1e-4836-81b7-cf1cbbd5411c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f3fbb22f3c988c5f3e7d2cea906d5aede256d193b4d65febd83a1858f38873c\"" Feb 14 01:01:45.353112 containerd[1514]: time="2025-02-14T01:01:45.352922860Z" level=info msg="CreateContainer within sandbox \"8f3fbb22f3c988c5f3e7d2cea906d5aede256d193b4d65febd83a1858f38873c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 14 01:01:45.370498 containerd[1514]: time="2025-02-14T01:01:45.370418550Z" level=info msg="CreateContainer within sandbox \"8f3fbb22f3c988c5f3e7d2cea906d5aede256d193b4d65febd83a1858f38873c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2709cb7718a056a72d1a04e0d3ec57da44129ed328ce08798bd4a04be7345a22\"" Feb 14 01:01:45.371965 containerd[1514]: time="2025-02-14T01:01:45.371387480Z" level=info msg="StartContainer for \"2709cb7718a056a72d1a04e0d3ec57da44129ed328ce08798bd4a04be7345a22\"" Feb 14 01:01:45.398630 containerd[1514]: time="2025-02-14T01:01:45.398581949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-kfp5g,Uid:29ae69a8-e3f7-4286-8754-194c96256cf1,Namespace:tigera-operator,Attempt:0,}" Feb 14 01:01:45.417912 systemd[1]: Started cri-containerd-2709cb7718a056a72d1a04e0d3ec57da44129ed328ce08798bd4a04be7345a22.scope - libcontainer container 2709cb7718a056a72d1a04e0d3ec57da44129ed328ce08798bd4a04be7345a22. Feb 14 01:01:45.452058 containerd[1514]: time="2025-02-14T01:01:45.450800705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:01:45.452813 containerd[1514]: time="2025-02-14T01:01:45.452663600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:01:45.452813 containerd[1514]: time="2025-02-14T01:01:45.452709885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:45.453582 containerd[1514]: time="2025-02-14T01:01:45.452829855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:01:45.480715 containerd[1514]: time="2025-02-14T01:01:45.479985211Z" level=info msg="StartContainer for \"2709cb7718a056a72d1a04e0d3ec57da44129ed328ce08798bd4a04be7345a22\" returns successfully" Feb 14 01:01:45.490156 systemd[1]: Started cri-containerd-054e4c102f6b7510fd3e1b38b56ef01162d12d0b5e0c63711f17ee2b3cb213b1.scope - libcontainer container 054e4c102f6b7510fd3e1b38b56ef01162d12d0b5e0c63711f17ee2b3cb213b1. 
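
Each StartContainer above is mirrored by systemd starting a transient cri-containerd-<id>.scope unit whose name embeds the 64-hex container ID. A small sketch of recovering the ID from such unit names; the containerID helper is ours, not part of systemd or containerd.

// scope_id.go - recover the container ID from a transient scope unit name
// of the form cri-containerd-<id>.scope, as logged above.
package main

import (
	"fmt"
	"strings"
)

func containerID(unit string) (string, bool) {
	const prefix, suffix = "cri-containerd-", ".scope"
	if !strings.HasPrefix(unit, prefix) || !strings.HasSuffix(unit, suffix) {
		return "", false
	}
	return strings.TrimSuffix(strings.TrimPrefix(unit, prefix), suffix), true
}

func main() {
	id, ok := containerID("cri-containerd-2709cb7718a056a72d1a04e0d3ec57da44129ed328ce08798bd4a04be7345a22.scope")
	fmt.Println(ok, id)
}

Mapping unit names back to container IDs this way is handy when correlating systemd scope events with the containerd entries around them.
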
Feb 14 01:01:45.564618 containerd[1514]: time="2025-02-14T01:01:45.563119211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-kfp5g,Uid:29ae69a8-e3f7-4286-8754-194c96256cf1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"054e4c102f6b7510fd3e1b38b56ef01162d12d0b5e0c63711f17ee2b3cb213b1\"" Feb 14 01:01:45.566952 containerd[1514]: time="2025-02-14T01:01:45.566853792Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 14 01:01:45.826629 sudo[1803]: pam_unix(sudo:session): session closed for user root Feb 14 01:01:45.982875 sshd[1800]: pam_unix(sshd:session): session closed for user core Feb 14 01:01:45.988888 systemd[1]: sshd@11-10.230.12.186:22-147.75.109.163:54520.service: Deactivated successfully. Feb 14 01:01:45.992141 systemd[1]: session-11.scope: Deactivated successfully. Feb 14 01:01:45.992429 systemd[1]: session-11.scope: Consumed 5.740s CPU time, 143.0M memory peak, 0B memory swap peak. Feb 14 01:01:45.994297 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Feb 14 01:01:45.996446 systemd-logind[1490]: Removed session 11. Feb 14 01:01:46.114922 systemd[1]: run-containerd-runc-k8s.io-8f3fbb22f3c988c5f3e7d2cea906d5aede256d193b4d65febd83a1858f38873c-runc.qbAjrI.mount: Deactivated successfully. Feb 14 01:01:47.251033 kubelet[2722]: I0214 01:01:47.249862 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wt42b" podStartSLOduration=3.24983042 podStartE2EDuration="3.24983042s" podCreationTimestamp="2025-02-14 01:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:01:45.677177443 +0000 UTC m=+6.391900804" watchObservedRunningTime="2025-02-14 01:01:47.24983042 +0000 UTC m=+7.964553767" Feb 14 01:01:47.657617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631114366.mount: Deactivated successfully. Feb 14 01:01:47.821893 systemd[1]: Started sshd@12-10.230.12.186:22-92.255.85.188:42548.service - OpenSSH per-connection server daemon (92.255.85.188:42548). Feb 14 01:01:48.463379 sshd[3057]: Invalid user test from 92.255.85.188 port 42548 Feb 14 01:01:48.535458 sshd[3057]: Connection closed by invalid user test 92.255.85.188 port 42548 [preauth] Feb 14 01:01:48.537457 systemd[1]: sshd@12-10.230.12.186:22-92.255.85.188:42548.service: Deactivated successfully. 
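
The PullImage request above names the tigera/operator image by tag, and the ImageCreate events that follow resolve it to a sha256 digest. Below is a rough happy-path splitter for both reference forms; canonical parsing lives in the container image reference libraries, so treat this only as an illustration.

// imageref.go - naive split of an image reference into name, tag, and
// digest, covering the two forms seen in the pull below.
package main

import (
	"fmt"
	"strings"
)

func splitRef(ref string) (name, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], "", ref[i+1:]
	}
	// Only a colon after the final slash separates a tag (avoids
	// misreading a registry port as a tag).
	slash := strings.LastIndex(ref, "/")
	if colon := strings.LastIndex(ref, ":"); colon > slash {
		return ref[:colon], ref[colon+1:], ""
	}
	return ref, "latest", ""
}

func main() {
	fmt.Println(splitRef("quay.io/tigera/operator:v1.36.2"))
	fmt.Println(splitRef("quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764"))
}
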
Feb 14 01:01:48.551621 containerd[1514]: time="2025-02-14T01:01:48.551516236Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:48.553147 containerd[1514]: time="2025-02-14T01:01:48.552851494Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 14 01:01:48.556091 containerd[1514]: time="2025-02-14T01:01:48.553997901Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:48.558448 containerd[1514]: time="2025-02-14T01:01:48.558398616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:48.559987 containerd[1514]: time="2025-02-14T01:01:48.559952940Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.993054903s" Feb 14 01:01:48.560170 containerd[1514]: time="2025-02-14T01:01:48.560143077Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 14 01:01:48.564479 containerd[1514]: time="2025-02-14T01:01:48.564425723Z" level=info msg="CreateContainer within sandbox \"054e4c102f6b7510fd3e1b38b56ef01162d12d0b5e0c63711f17ee2b3cb213b1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 14 01:01:48.584146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894477242.mount: Deactivated successfully. Feb 14 01:01:48.593740 containerd[1514]: time="2025-02-14T01:01:48.593218305Z" level=info msg="CreateContainer within sandbox \"054e4c102f6b7510fd3e1b38b56ef01162d12d0b5e0c63711f17ee2b3cb213b1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"354fd79095248730787594a1eb4353105e97fdf1b7e80a533b7f4ccfefbc3a52\"" Feb 14 01:01:48.594354 containerd[1514]: time="2025-02-14T01:01:48.594201911Z" level=info msg="StartContainer for \"354fd79095248730787594a1eb4353105e97fdf1b7e80a533b7f4ccfefbc3a52\"" Feb 14 01:01:48.642732 systemd[1]: Started cri-containerd-354fd79095248730787594a1eb4353105e97fdf1b7e80a533b7f4ccfefbc3a52.scope - libcontainer container 354fd79095248730787594a1eb4353105e97fdf1b7e80a533b7f4ccfefbc3a52. 
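
The pull statistics above (21762497 bytes read, completed in 2.993054903s) work out to roughly 6.9 MiB/s; the snippet below just reproduces that arithmetic from the logged numbers.

// pull_rate.go - back-of-envelope throughput for the tigera/operator pull.
package main

import "fmt"

func main() {
	const bytes = 21762497.0  // "bytes read" from the log
	const secs = 2.993054903  // pull duration from the log
	fmt.Printf("%.2f MiB/s\n", bytes/secs/(1024*1024))
}
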
Feb 14 01:01:48.706790 containerd[1514]: time="2025-02-14T01:01:48.706014033Z" level=info msg="StartContainer for \"354fd79095248730787594a1eb4353105e97fdf1b7e80a533b7f4ccfefbc3a52\" returns successfully" Feb 14 01:01:49.697662 kubelet[2722]: I0214 01:01:49.697430 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-kfp5g" podStartSLOduration=1.702315351 podStartE2EDuration="4.697409801s" podCreationTimestamp="2025-02-14 01:01:45 +0000 UTC" firstStartedPulling="2025-02-14 01:01:45.566084195 +0000 UTC m=+6.280807531" lastFinishedPulling="2025-02-14 01:01:48.56117864 +0000 UTC m=+9.275901981" observedRunningTime="2025-02-14 01:01:49.696175129 +0000 UTC m=+10.410898490" watchObservedRunningTime="2025-02-14 01:01:49.697409801 +0000 UTC m=+10.412133151" Feb 14 01:01:52.434625 kubelet[2722]: I0214 01:01:52.434432 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/257e20d1-c896-4af3-9c16-b518e0e806f9-typha-certs\") pod \"calico-typha-5bb866c4df-n5xr2\" (UID: \"257e20d1-c896-4af3-9c16-b518e0e806f9\") " pod="calico-system/calico-typha-5bb866c4df-n5xr2" Feb 14 01:01:52.434625 kubelet[2722]: I0214 01:01:52.434488 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/257e20d1-c896-4af3-9c16-b518e0e806f9-tigera-ca-bundle\") pod \"calico-typha-5bb866c4df-n5xr2\" (UID: \"257e20d1-c896-4af3-9c16-b518e0e806f9\") " pod="calico-system/calico-typha-5bb866c4df-n5xr2" Feb 14 01:01:52.434625 kubelet[2722]: I0214 01:01:52.434528 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvnt\" (UniqueName: \"kubernetes.io/projected/257e20d1-c896-4af3-9c16-b518e0e806f9-kube-api-access-5lvnt\") pod \"calico-typha-5bb866c4df-n5xr2\" (UID: \"257e20d1-c896-4af3-9c16-b518e0e806f9\") " pod="calico-system/calico-typha-5bb866c4df-n5xr2" Feb 14 01:01:52.448006 systemd[1]: Created slice kubepods-besteffort-pod257e20d1_c896_4af3_9c16_b518e0e806f9.slice - libcontainer container kubepods-besteffort-pod257e20d1_c896_4af3_9c16_b518e0e806f9.slice. Feb 14 01:01:52.598673 systemd[1]: Created slice kubepods-besteffort-pod383bfb44_a0bc_4838_bf90_ed38b0bd26b0.slice - libcontainer container kubepods-besteffort-pod383bfb44_a0bc_4838_bf90_ed38b0bd26b0.slice. 
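
The tigera-operator startup line a few entries above is the first with a real image pull, so its two durations differ: podStartE2EDuration spans creation to observed running, while podStartSLOduration excludes the pull window. Recomputing from the logged timestamps agrees with the logged SLO figure to within a few nanoseconds of rounding; the parse helper below is ours.

// slo_vs_e2e.go - check that podStartSLOduration is approximately
// podStartE2EDuration minus (lastFinishedPulling - firstStartedPulling),
// using the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-02-14 01:01:45 +0000 UTC")
	running := parse("2025-02-14 01:01:49.697409801 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-02-14 01:01:45.566084195 +0000 UTC")
	pullEnd := parse("2025-02-14 01:01:48.56117864 +0000 UTC")

	e2e := running.Sub(created)
	pull := pullEnd.Sub(pullStart)
	fmt.Println("E2E:", e2e, "pull:", pull, "SLO approx:", e2e-pull)
}
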
Feb 14 01:01:52.636500 kubelet[2722]: I0214 01:01:52.636433 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-node-certs\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636712 kubelet[2722]: I0214 01:01:52.636513 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-var-run-calico\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636712 kubelet[2722]: I0214 01:01:52.636594 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-var-lib-calico\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636712 kubelet[2722]: I0214 01:01:52.636642 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-xtables-lock\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636712 kubelet[2722]: I0214 01:01:52.636673 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-policysync\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636712 kubelet[2722]: I0214 01:01:52.636698 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-tigera-ca-bundle\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636938 kubelet[2722]: I0214 01:01:52.636742 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-cni-net-dir\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636938 kubelet[2722]: I0214 01:01:52.636774 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-lib-modules\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636938 kubelet[2722]: I0214 01:01:52.636801 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-cni-log-dir\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636938 kubelet[2722]: I0214 01:01:52.636828 2722 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv622\" (UniqueName: \"kubernetes.io/projected/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-kube-api-access-bv622\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.636938 kubelet[2722]: I0214 01:01:52.636867 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-cni-bin-dir\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.637202 kubelet[2722]: I0214 01:01:52.636894 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/383bfb44-a0bc-4838-bf90-ed38b0bd26b0-flexvol-driver-host\") pod \"calico-node-mcs5v\" (UID: \"383bfb44-a0bc-4838-bf90-ed38b0bd26b0\") " pod="calico-system/calico-node-mcs5v" Feb 14 01:01:52.698884 kubelet[2722]: E0214 01:01:52.697469 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:01:52.739575 kubelet[2722]: I0214 01:01:52.737356 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgth\" (UniqueName: \"kubernetes.io/projected/d3d8e6bb-a729-4387-87b3-0f4e4f6643d0-kube-api-access-mlgth\") pod \"csi-node-driver-gt8pt\" (UID: \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\") " pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:01:52.739575 kubelet[2722]: I0214 01:01:52.737442 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3d8e6bb-a729-4387-87b3-0f4e4f6643d0-varrun\") pod \"csi-node-driver-gt8pt\" (UID: \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\") " pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:01:52.739575 kubelet[2722]: I0214 01:01:52.737508 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3d8e6bb-a729-4387-87b3-0f4e4f6643d0-kubelet-dir\") pod \"csi-node-driver-gt8pt\" (UID: \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\") " pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:01:52.739575 kubelet[2722]: I0214 01:01:52.737559 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3d8e6bb-a729-4387-87b3-0f4e4f6643d0-socket-dir\") pod \"csi-node-driver-gt8pt\" (UID: \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\") " pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:01:52.739575 kubelet[2722]: I0214 01:01:52.737595 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3d8e6bb-a729-4387-87b3-0f4e4f6643d0-registration-dir\") pod \"csi-node-driver-gt8pt\" (UID: \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\") " pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:01:52.755949 containerd[1514]: time="2025-02-14T01:01:52.755863909Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-5bb866c4df-n5xr2,Uid:257e20d1-c896-4af3-9c16-b518e0e806f9,Namespace:calico-system,Attempt:0,}"
Feb 14 01:01:52.757509 kubelet[2722]: E0214 01:01:52.757480 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:01:52.757701 kubelet[2722]: W0214 01:01:52.757676 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:01:52.757990 kubelet[2722]: E0214 01:01:52.757821 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
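This three-line pattern is one failure reported three ways: kubelet rescans its FlexVolume plugin directory, tries to run the `uds` driver's `init` command, the binary is missing, so the collected output is empty, and decoding "" as JSON fails; the probe repeats many times in this capture (the 01:01:52 burst above and the 01:01:57 burst below), each retry emitting the same triplet. A minimal repro sketch, assuming only Go standard-library semantics and not kubelet's actual source; the `DriverStatus` struct here is an illustrative stand-in for the FlexVolume reply shape:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a cut-down stand-in for a FlexVolume driver reply;
// the field set is illustrative, not the full upstream struct.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Looking up a binary that does not exist fails with
	// `executable file not found in $PATH`, and output stays empty.
	out, err := exec.Command("uds", "init").CombinedOutput()
	fmt.Printf("exec: %v, output: %q\n", err, out)

	var status DriverStatus
	// json.Unmarshal on empty input yields exactly the logged error.
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("unmarshal:", err) // unexpected end of JSON input
	}
}
```

A conforming driver would answer `init` with a JSON status on stdout, along the lines of `{"status":"Success","capabilities":{"attach":false}}` (per the FlexVolume convention; exact capabilities vary by driver), which is why an empty reply is treated as a hard probe failure rather than ignored.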
Feb 14 01:01:52.867863 containerd[1514]: time="2025-02-14T01:01:52.863817065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:01:52.867863 containerd[1514]: time="2025-02-14T01:01:52.863917577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:01:52.867863 containerd[1514]: time="2025-02-14T01:01:52.863934017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:01:52.867863 containerd[1514]: time="2025-02-14T01:01:52.864058914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:01:52.868071 kubelet[2722]: E0214 01:01:52.867860 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:01:52.868071 kubelet[2722]: W0214 01:01:52.867875 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:01:52.868071 kubelet[2722]: E0214 01:01:52.867890 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:01:52.897027 kubelet[2722]: E0214 01:01:52.896892 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:01:52.897027 kubelet[2722]: W0214 01:01:52.896922 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:01:52.897027 kubelet[2722]: E0214 01:01:52.897021 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:01:52.908236 containerd[1514]: time="2025-02-14T01:01:52.906822989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mcs5v,Uid:383bfb44-a0bc-4838-bf90-ed38b0bd26b0,Namespace:calico-system,Attempt:0,}"
Feb 14 01:01:52.909332 systemd[1]: Started cri-containerd-b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c.scope - libcontainer container b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c.
Feb 14 01:01:53.000088 containerd[1514]: time="2025-02-14T01:01:52.997411452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb866c4df-n5xr2,Uid:257e20d1-c896-4af3-9c16-b518e0e806f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\""
Feb 14 01:01:53.006818 containerd[1514]: time="2025-02-14T01:01:53.003968689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:01:53.006818 containerd[1514]: time="2025-02-14T01:01:53.004633177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:01:53.006818 containerd[1514]: time="2025-02-14T01:01:53.004671449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:01:53.006818 containerd[1514]: time="2025-02-14T01:01:53.004825450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:01:53.008025 containerd[1514]: time="2025-02-14T01:01:53.007154393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 14 01:01:53.046735 systemd[1]: Started cri-containerd-16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70.scope - libcontainer container 16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70.
Feb 14 01:01:53.097524 containerd[1514]: time="2025-02-14T01:01:53.097331251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mcs5v,Uid:383bfb44-a0bc-4838-bf90-ed38b0bd26b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\""
Feb 14 01:01:54.580298 kubelet[2722]: E0214 01:01:54.580234 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0"
Feb 14 01:01:54.845511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581060299.mount: Deactivated successfully.
Feb 14 01:01:56.581185 kubelet[2722]: E0214 01:01:56.580517 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0"
Feb 14 01:01:56.829247 containerd[1514]: time="2025-02-14T01:01:56.829157657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:01:56.831057 containerd[1514]: time="2025-02-14T01:01:56.830992482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Feb 14 01:01:56.832253 containerd[1514]: time="2025-02-14T01:01:56.832128377Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:01:56.835972 containerd[1514]: time="2025-02-14T01:01:56.835902802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:01:56.837838 containerd[1514]: time="2025-02-14T01:01:56.837780556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.830568938s"
Feb 14 01:01:56.837925 containerd[1514]: time="2025-02-14T01:01:56.837844283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
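The entries above are the CRI request/response flow as containerd sees it: kubelet issues RunPodSandbox with the PodSandboxMetadata that containerd echoes verbatim, containerd spawns a runc.v2 shim (the "loading plugin io.containerd.ttrpc.*" lines) inside a systemd cri-containerd-<id>.scope, and the sandbox id comes back in the response; the "NetworkReady=false" pod_workers errors are kubelet reading the runtime's status conditions before Calico has installed its CNI config. A hedged Go sketch of those two calls against the published CRI v1 API (k8s.io/cri-api); the socket path, dial options, and bare-bones sandbox config are assumptions for illustration, not taken from this log:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket; Flatcar's real path may differ.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Status returns the RuntimeReady/NetworkReady conditions;
	// NetworkReady=false is what pod_workers.go complains about above.
	st, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range st.GetStatus().GetConditions() {
		fmt.Printf("%s=%v reason=%s\n", c.Type, c.Status, c.Reason)
	}

	// RunPodSandbox carries the same PodSandboxMetadata fields that
	// containerd prints in its "RunPodSandbox for ..." lines; a real
	// request would also set hostname, log directory, linux config, etc.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-typha-5bb866c4df-n5xr2",
				Uid:       "257e20d1-c896-4af3-9c16-b518e0e806f9",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. "b66b8751..."
}
```

The chicken-and-egg shape is visible in the log itself: the sandboxes for calico-typha and calico-node are created while NetworkReady is still false, because those are exactly the pods that will initialize the CNI plugin.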
\"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 14 01:01:56.840673 containerd[1514]: time="2025-02-14T01:01:56.840596488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 14 01:01:56.869932 containerd[1514]: time="2025-02-14T01:01:56.869858587Z" level=info msg="CreateContainer within sandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 14 01:01:56.899855 containerd[1514]: time="2025-02-14T01:01:56.899770003Z" level=info msg="CreateContainer within sandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\"" Feb 14 01:01:56.902145 containerd[1514]: time="2025-02-14T01:01:56.902104292Z" level=info msg="StartContainer for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\"" Feb 14 01:01:57.045868 systemd[1]: Started cri-containerd-7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0.scope - libcontainer container 7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0. Feb 14 01:01:57.129220 containerd[1514]: time="2025-02-14T01:01:57.127876835Z" level=info msg="StartContainer for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" returns successfully" Feb 14 01:01:57.735024 kubelet[2722]: I0214 01:01:57.734927 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bb866c4df-n5xr2" podStartSLOduration=1.901508967 podStartE2EDuration="5.734885586s" podCreationTimestamp="2025-02-14 01:01:52 +0000 UTC" firstStartedPulling="2025-02-14 01:01:53.006648077 +0000 UTC m=+13.721371415" lastFinishedPulling="2025-02-14 01:01:56.840024693 +0000 UTC m=+17.554748034" observedRunningTime="2025-02-14 01:01:57.73220339 +0000 UTC m=+18.446926753" watchObservedRunningTime="2025-02-14 01:01:57.734885586 +0000 UTC m=+18.449608930" Feb 14 01:01:57.760590 kubelet[2722]: E0214 01:01:57.760517 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:57.760590 kubelet[2722]: W0214 01:01:57.760574 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:57.761141 kubelet[2722]: E0214 01:01:57.760615 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:57.761141 kubelet[2722]: E0214 01:01:57.761121 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:57.761141 kubelet[2722]: W0214 01:01:57.761139 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:57.761405 kubelet[2722]: E0214 01:01:57.761155 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 14 01:01:57.760590 kubelet[2722]: E0214 01:01:57.760517 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:01:57.760590 kubelet[2722]: W0214 01:01:57.760574 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:01:57.761141 kubelet[2722]: E0214 01:01:57.760615 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:01:57.808364 kubelet[2722]: E0214 01:01:57.808296 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:01:57.808364 kubelet[2722]: W0214 01:01:57.808318 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:01:57.808364 kubelet[2722]: E0214 01:01:57.808342 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:57.808940 kubelet[2722]: E0214 01:01:57.808854 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:57.808940 kubelet[2722]: W0214 01:01:57.808876 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:57.808940 kubelet[2722]: E0214 01:01:57.808893 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.579385 kubelet[2722]: E0214 01:01:58.579322 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:01:58.645377 containerd[1514]: time="2025-02-14T01:01:58.645278677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:58.646622 containerd[1514]: time="2025-02-14T01:01:58.646225337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 14 01:01:58.660363 containerd[1514]: time="2025-02-14T01:01:58.660303183Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:58.669913 containerd[1514]: time="2025-02-14T01:01:58.669873163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:01:58.670819 containerd[1514]: time="2025-02-14T01:01:58.670782696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.830133739s" Feb 14 01:01:58.670975 containerd[1514]: time="2025-02-14T01:01:58.670944313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 14 01:01:58.677113 containerd[1514]: time="2025-02-14T01:01:58.677028663Z" level=info msg="CreateContainer within sandbox \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 14 01:01:58.771331 containerd[1514]: time="2025-02-14T01:01:58.771272482Z" level=info msg="CreateContainer within sandbox \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11\"" Feb 14 01:01:58.774468 containerd[1514]: time="2025-02-14T01:01:58.772718420Z" level=info msg="StartContainer for 
\"3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11\"" Feb 14 01:01:58.794615 kubelet[2722]: E0214 01:01:58.794575 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.795049 kubelet[2722]: W0214 01:01:58.794615 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.796099 kubelet[2722]: E0214 01:01:58.795362 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.796177 kubelet[2722]: E0214 01:01:58.796114 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.796177 kubelet[2722]: W0214 01:01:58.796131 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.796177 kubelet[2722]: E0214 01:01:58.796148 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.796726 kubelet[2722]: E0214 01:01:58.796692 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.796824 kubelet[2722]: W0214 01:01:58.796807 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.796878 kubelet[2722]: E0214 01:01:58.796830 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.797182 kubelet[2722]: E0214 01:01:58.797160 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.797251 kubelet[2722]: W0214 01:01:58.797198 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.797251 kubelet[2722]: E0214 01:01:58.797217 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.797653 kubelet[2722]: E0214 01:01:58.797590 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.797653 kubelet[2722]: W0214 01:01:58.797604 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.797653 kubelet[2722]: E0214 01:01:58.797619 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:58.797911 kubelet[2722]: E0214 01:01:58.797888 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.797911 kubelet[2722]: W0214 01:01:58.797907 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.798116 kubelet[2722]: E0214 01:01:58.797928 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.798474 kubelet[2722]: E0214 01:01:58.798302 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.798474 kubelet[2722]: W0214 01:01:58.798348 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.798474 kubelet[2722]: E0214 01:01:58.798368 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.798849 kubelet[2722]: E0214 01:01:58.798827 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.798849 kubelet[2722]: W0214 01:01:58.798847 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.798991 kubelet[2722]: E0214 01:01:58.798864 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.799524 kubelet[2722]: E0214 01:01:58.799398 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.799524 kubelet[2722]: W0214 01:01:58.799420 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.799524 kubelet[2722]: E0214 01:01:58.799436 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.799798 kubelet[2722]: E0214 01:01:58.799777 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.799871 kubelet[2722]: W0214 01:01:58.799823 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.799871 kubelet[2722]: E0214 01:01:58.799843 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:58.800241 kubelet[2722]: E0214 01:01:58.800213 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.800241 kubelet[2722]: W0214 01:01:58.800229 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.800351 kubelet[2722]: E0214 01:01:58.800244 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.800645 kubelet[2722]: E0214 01:01:58.800622 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.800722 kubelet[2722]: W0214 01:01:58.800668 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.800722 kubelet[2722]: E0214 01:01:58.800690 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.801093 kubelet[2722]: E0214 01:01:58.801062 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.801161 kubelet[2722]: W0214 01:01:58.801117 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.801161 kubelet[2722]: E0214 01:01:58.801137 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.801909 kubelet[2722]: E0214 01:01:58.801882 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.801909 kubelet[2722]: W0214 01:01:58.801905 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.802217 kubelet[2722]: E0214 01:01:58.801922 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.805325 kubelet[2722]: E0214 01:01:58.803713 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.805325 kubelet[2722]: W0214 01:01:58.803761 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.805325 kubelet[2722]: E0214 01:01:58.803779 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:58.808908 kubelet[2722]: E0214 01:01:58.808670 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.808908 kubelet[2722]: W0214 01:01:58.808692 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.808908 kubelet[2722]: E0214 01:01:58.808727 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.809273 kubelet[2722]: E0214 01:01:58.809251 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.809381 kubelet[2722]: W0214 01:01:58.809359 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.809491 kubelet[2722]: E0214 01:01:58.809468 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.810669 kubelet[2722]: E0214 01:01:58.810632 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.810669 kubelet[2722]: W0214 01:01:58.810663 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.810802 kubelet[2722]: E0214 01:01:58.810692 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.812780 kubelet[2722]: E0214 01:01:58.812595 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.812780 kubelet[2722]: W0214 01:01:58.812619 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.812780 kubelet[2722]: E0214 01:01:58.812746 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.812979 kubelet[2722]: E0214 01:01:58.812916 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.812979 kubelet[2722]: W0214 01:01:58.812944 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.813090 kubelet[2722]: E0214 01:01:58.813038 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:58.813956 kubelet[2722]: E0214 01:01:58.813550 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.813956 kubelet[2722]: W0214 01:01:58.813571 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.813956 kubelet[2722]: E0214 01:01:58.813682 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.814172 kubelet[2722]: E0214 01:01:58.813987 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.814172 kubelet[2722]: W0214 01:01:58.814002 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.814172 kubelet[2722]: E0214 01:01:58.814027 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.814772 kubelet[2722]: E0214 01:01:58.814728 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.815139 kubelet[2722]: W0214 01:01:58.814984 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.815658 kubelet[2722]: E0214 01:01:58.815235 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.815658 kubelet[2722]: E0214 01:01:58.815405 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.815658 kubelet[2722]: W0214 01:01:58.815420 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.815658 kubelet[2722]: E0214 01:01:58.815493 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.818694 kubelet[2722]: E0214 01:01:58.818668 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.818694 kubelet[2722]: W0214 01:01:58.818691 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.818952 kubelet[2722]: E0214 01:01:58.818820 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:58.820109 kubelet[2722]: E0214 01:01:58.820080 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.820109 kubelet[2722]: W0214 01:01:58.820103 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.820514 kubelet[2722]: E0214 01:01:58.820355 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.820514 kubelet[2722]: E0214 01:01:58.820375 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.820514 kubelet[2722]: W0214 01:01:58.820390 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.820514 kubelet[2722]: E0214 01:01:58.820485 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.823391 kubelet[2722]: E0214 01:01:58.820926 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.823391 kubelet[2722]: W0214 01:01:58.820949 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.823391 kubelet[2722]: E0214 01:01:58.821590 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.823391 kubelet[2722]: E0214 01:01:58.821972 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.823391 kubelet[2722]: W0214 01:01:58.821988 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.823391 kubelet[2722]: E0214 01:01:58.822010 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.823706 kubelet[2722]: E0214 01:01:58.823513 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.823706 kubelet[2722]: W0214 01:01:58.823527 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.823706 kubelet[2722]: E0214 01:01:58.823572 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:01:58.823875 kubelet[2722]: E0214 01:01:58.823863 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.823926 kubelet[2722]: W0214 01:01:58.823879 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.823926 kubelet[2722]: E0214 01:01:58.823894 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.824451 kubelet[2722]: E0214 01:01:58.824418 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.824451 kubelet[2722]: W0214 01:01:58.824442 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.824599 kubelet[2722]: E0214 01:01:58.824458 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.824785 kubelet[2722]: E0214 01:01:58.824755 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:01:58.824785 kubelet[2722]: W0214 01:01:58.824780 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:01:58.824891 kubelet[2722]: E0214 01:01:58.824797 2722 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:01:58.828309 systemd[1]: Started cri-containerd-3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11.scope - libcontainer container 3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11. Feb 14 01:01:58.884778 containerd[1514]: time="2025-02-14T01:01:58.884469354Z" level=info msg="StartContainer for \"3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11\" returns successfully" Feb 14 01:01:58.910868 systemd[1]: cri-containerd-3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11.scope: Deactivated successfully. Feb 14 01:01:58.946934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11-rootfs.mount: Deactivated successfully. 
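[annotation] The kubelet spam above is FlexVolume dynamic plugin probing: on each probe the kubelet execs every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and parses a JSON status object from the driver's stdout. Here the nodeagent~uds directory exists but its uds executable is missing, so the call fails, stdout is empty, and unmarshalling the empty string yields "unexpected end of JSON input". A minimal sketch of what a conforming driver is expected to emit (a hypothetical stand-in, not the actual nodeagent~uds binary):

```python
#!/usr/bin/env python3
# Hypothetical FlexVolume driver sketch. The kubelet invokes the driver
# executable as "<driver> init" (and later "<driver> mount ..." etc.) and
# expects a JSON status object on stdout; an empty stdout is exactly what
# produces the "unexpected end of JSON input" errors in the log above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and capabilities; attach=False tells the kubelet
        # to skip the attach/detach phase for this driver.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any operation this sketch does not implement.
    print(json.dumps({"status": "Not supported", "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```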
Feb 14 01:01:59.008731 containerd[1514]: time="2025-02-14T01:01:58.967221597Z" level=info msg="shim disconnected" id=3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11 namespace=k8s.io Feb 14 01:01:59.009261 containerd[1514]: time="2025-02-14T01:01:59.008980661Z" level=warning msg="cleaning up after shim disconnected" id=3cc7d9e2015d8b08b6996e2b24a3b476d925d5d598e541cac4dab8b6fcaa0c11 namespace=k8s.io Feb 14 01:01:59.009261 containerd[1514]: time="2025-02-14T01:01:59.009022750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:01:59.723685 containerd[1514]: time="2025-02-14T01:01:59.723379312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 14 01:02:00.580057 kubelet[2722]: E0214 01:02:00.579868 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:02:02.584818 kubelet[2722]: E0214 01:02:02.584739 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:02:04.586811 kubelet[2722]: E0214 01:02:04.586716 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:02:06.579643 kubelet[2722]: E0214 01:02:06.579585 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:02:06.699562 containerd[1514]: time="2025-02-14T01:02:06.699387746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:06.700624 containerd[1514]: time="2025-02-14T01:02:06.700525392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 14 01:02:06.701525 containerd[1514]: time="2025-02-14T01:02:06.701465938Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:06.709831 containerd[1514]: time="2025-02-14T01:02:06.709767075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:06.711399 containerd[1514]: time="2025-02-14T01:02:06.711017121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.987577383s" Feb 14 01:02:06.711399 containerd[1514]: time="2025-02-14T01:02:06.711063779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 14 01:02:06.713864 containerd[1514]: time="2025-02-14T01:02:06.713774280Z" level=info msg="CreateContainer within sandbox \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 14 01:02:06.731776 containerd[1514]: time="2025-02-14T01:02:06.731728478Z" level=info msg="CreateContainer within sandbox \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed\"" Feb 14 01:02:06.733590 containerd[1514]: time="2025-02-14T01:02:06.733555189Z" level=info msg="StartContainer for \"5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed\"" Feb 14 01:02:06.808759 systemd[1]: Started cri-containerd-5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed.scope - libcontainer container 5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed. Feb 14 01:02:06.854509 containerd[1514]: time="2025-02-14T01:02:06.854354912Z" level=info msg="StartContainer for \"5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed\" returns successfully" Feb 14 01:02:07.563338 systemd[1]: cri-containerd-5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed.scope: Deactivated successfully. Feb 14 01:02:07.612832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed-rootfs.mount: Deactivated successfully. Feb 14 01:02:07.649010 kubelet[2722]: I0214 01:02:07.648604 2722 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 14 01:02:07.832591 containerd[1514]: time="2025-02-14T01:02:07.831985855Z" level=info msg="shim disconnected" id=5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed namespace=k8s.io Feb 14 01:02:07.832591 containerd[1514]: time="2025-02-14T01:02:07.832097574Z" level=warning msg="cleaning up after shim disconnected" id=5ba594cd5fcb96d8e60c974f91196e7e5fe76017f443ad88c5d6b02bbcb8a7ed namespace=k8s.io Feb 14 01:02:07.832591 containerd[1514]: time="2025-02-14T01:02:07.832118543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:02:07.870884 containerd[1514]: time="2025-02-14T01:02:07.870332516Z" level=warning msg="cleanup warnings time=\"2025-02-14T01:02:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 14 01:02:07.911733 systemd[1]: Created slice kubepods-burstable-pod72fd21e3_08ab_4994_bc45_716e2988248d.slice - libcontainer container kubepods-burstable-pod72fd21e3_08ab_4994_bc45_716e2988248d.slice. Feb 14 01:02:07.927077 systemd[1]: Created slice kubepods-besteffort-pod73cf4496_4287_424e_9aa7_d12b46f31c22.slice - libcontainer container kubepods-besteffort-pod73cf4496_4287_424e_9aa7_d12b46f31c22.slice. 
Feb 14 01:02:07.941254 systemd[1]: Created slice kubepods-besteffort-poda76acc35_8f62_4f71_9d2c_67c0a127253c.slice - libcontainer container kubepods-besteffort-poda76acc35_8f62_4f71_9d2c_67c0a127253c.slice. Feb 14 01:02:07.956087 systemd[1]: Created slice kubepods-besteffort-pod320253fc_8ef3_4370_b081_23e4e4fce4d9.slice - libcontainer container kubepods-besteffort-pod320253fc_8ef3_4370_b081_23e4e4fce4d9.slice. Feb 14 01:02:07.965988 systemd[1]: Created slice kubepods-burstable-pod2d4aa1d2_3ade_47c1_87a9_67c8a86c7457.slice - libcontainer container kubepods-burstable-pod2d4aa1d2_3ade_47c1_87a9_67c8a86c7457.slice. Feb 14 01:02:08.022816 kubelet[2722]: I0214 01:02:08.022743 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d4aa1d2-3ade-47c1-87a9-67c8a86c7457-config-volume\") pod \"coredns-668d6bf9bc-rhl98\" (UID: \"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457\") " pod="kube-system/coredns-668d6bf9bc-rhl98" Feb 14 01:02:08.023570 kubelet[2722]: I0214 01:02:08.023287 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2jtq\" (UniqueName: \"kubernetes.io/projected/2d4aa1d2-3ade-47c1-87a9-67c8a86c7457-kube-api-access-w2jtq\") pod \"coredns-668d6bf9bc-rhl98\" (UID: \"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457\") " pod="kube-system/coredns-668d6bf9bc-rhl98" Feb 14 01:02:08.023570 kubelet[2722]: I0214 01:02:08.023472 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/320253fc-8ef3-4370-b081-23e4e4fce4d9-calico-apiserver-certs\") pod \"calico-apiserver-78cd759b77-mft6l\" (UID: \"320253fc-8ef3-4370-b081-23e4e4fce4d9\") " pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" Feb 14 01:02:08.023999 kubelet[2722]: I0214 01:02:08.023830 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a76acc35-8f62-4f71-9d2c-67c0a127253c-calico-apiserver-certs\") pod \"calico-apiserver-78cd759b77-szkpk\" (UID: \"a76acc35-8f62-4f71-9d2c-67c0a127253c\") " pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" Feb 14 01:02:08.024205 kubelet[2722]: I0214 01:02:08.024108 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pgq\" (UniqueName: \"kubernetes.io/projected/320253fc-8ef3-4370-b081-23e4e4fce4d9-kube-api-access-s2pgq\") pod \"calico-apiserver-78cd759b77-mft6l\" (UID: \"320253fc-8ef3-4370-b081-23e4e4fce4d9\") " pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" Feb 14 01:02:08.024585 kubelet[2722]: I0214 01:02:08.024409 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc778\" (UniqueName: \"kubernetes.io/projected/72fd21e3-08ab-4994-bc45-716e2988248d-kube-api-access-vc778\") pod \"coredns-668d6bf9bc-pds58\" (UID: \"72fd21e3-08ab-4994-bc45-716e2988248d\") " pod="kube-system/coredns-668d6bf9bc-pds58" Feb 14 01:02:08.024585 kubelet[2722]: I0214 01:02:08.024453 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zk6\" (UniqueName: \"kubernetes.io/projected/73cf4496-4287-424e-9aa7-d12b46f31c22-kube-api-access-z6zk6\") pod \"calico-kube-controllers-6646bd4b95-9q284\" (UID: \"73cf4496-4287-424e-9aa7-d12b46f31c22\") " 
pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" Feb 14 01:02:08.024585 kubelet[2722]: I0214 01:02:08.024524 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72fd21e3-08ab-4994-bc45-716e2988248d-config-volume\") pod \"coredns-668d6bf9bc-pds58\" (UID: \"72fd21e3-08ab-4994-bc45-716e2988248d\") " pod="kube-system/coredns-668d6bf9bc-pds58" Feb 14 01:02:08.025148 kubelet[2722]: I0214 01:02:08.024901 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbms\" (UniqueName: \"kubernetes.io/projected/a76acc35-8f62-4f71-9d2c-67c0a127253c-kube-api-access-tdbms\") pod \"calico-apiserver-78cd759b77-szkpk\" (UID: \"a76acc35-8f62-4f71-9d2c-67c0a127253c\") " pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" Feb 14 01:02:08.025148 kubelet[2722]: I0214 01:02:08.025067 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73cf4496-4287-424e-9aa7-d12b46f31c22-tigera-ca-bundle\") pod \"calico-kube-controllers-6646bd4b95-9q284\" (UID: \"73cf4496-4287-424e-9aa7-d12b46f31c22\") " pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" Feb 14 01:02:08.221391 containerd[1514]: time="2025-02-14T01:02:08.221220776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pds58,Uid:72fd21e3-08ab-4994-bc45-716e2988248d,Namespace:kube-system,Attempt:0,}" Feb 14 01:02:08.238159 containerd[1514]: time="2025-02-14T01:02:08.237757917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6646bd4b95-9q284,Uid:73cf4496-4287-424e-9aa7-d12b46f31c22,Namespace:calico-system,Attempt:0,}" Feb 14 01:02:08.249743 containerd[1514]: time="2025-02-14T01:02:08.249692792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-szkpk,Uid:a76acc35-8f62-4f71-9d2c-67c0a127253c,Namespace:calico-apiserver,Attempt:0,}" Feb 14 01:02:08.263680 containerd[1514]: time="2025-02-14T01:02:08.263318350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-mft6l,Uid:320253fc-8ef3-4370-b081-23e4e4fce4d9,Namespace:calico-apiserver,Attempt:0,}" Feb 14 01:02:08.270221 containerd[1514]: time="2025-02-14T01:02:08.270184935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhl98,Uid:2d4aa1d2-3ade-47c1-87a9-67c8a86c7457,Namespace:kube-system,Attempt:0,}" Feb 14 01:02:08.576365 containerd[1514]: time="2025-02-14T01:02:08.576298516Z" level=error msg="Failed to destroy network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.577064 containerd[1514]: time="2025-02-14T01:02:08.577026232Z" level=error msg="encountered an error cleaning up failed sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.577255 containerd[1514]: time="2025-02-14T01:02:08.577215579Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-78cd759b77-szkpk,Uid:a76acc35-8f62-4f71-9d2c-67c0a127253c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.579675 containerd[1514]: time="2025-02-14T01:02:08.578051868Z" level=error msg="Failed to destroy network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.579675 containerd[1514]: time="2025-02-14T01:02:08.578489691Z" level=error msg="encountered an error cleaning up failed sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.579675 containerd[1514]: time="2025-02-14T01:02:08.579058922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pds58,Uid:72fd21e3-08ab-4994-bc45-716e2988248d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.585831 kubelet[2722]: E0214 01:02:08.585785 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.586057 kubelet[2722]: E0214 01:02:08.586018 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.586138 kubelet[2722]: E0214 01:02:08.586071 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" Feb 14 01:02:08.586138 kubelet[2722]: E0214 01:02:08.586110 2722 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" Feb 14 01:02:08.586896 kubelet[2722]: E0214 01:02:08.586167 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78cd759b77-szkpk_calico-apiserver(a76acc35-8f62-4f71-9d2c-67c0a127253c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78cd759b77-szkpk_calico-apiserver(a76acc35-8f62-4f71-9d2c-67c0a127253c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" podUID="a76acc35-8f62-4f71-9d2c-67c0a127253c" Feb 14 01:02:08.586896 kubelet[2722]: E0214 01:02:08.586027 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pds58" Feb 14 01:02:08.586896 kubelet[2722]: E0214 01:02:08.586256 2722 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pds58" Feb 14 01:02:08.587776 kubelet[2722]: E0214 01:02:08.586306 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pds58_kube-system(72fd21e3-08ab-4994-bc45-716e2988248d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pds58_kube-system(72fd21e3-08ab-4994-bc45-716e2988248d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pds58" podUID="72fd21e3-08ab-4994-bc45-716e2988248d" Feb 14 01:02:08.593093 systemd[1]: Created slice kubepods-besteffort-podd3d8e6bb_a729_4387_87b3_0f4e4f6643d0.slice - libcontainer container kubepods-besteffort-podd3d8e6bb_a729_4387_87b3_0f4e4f6643d0.slice. 
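[annotation] Every RunPodSandbox failure in this stretch has the same root cause, spelled out in the error string itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes at startup, and refuses to set up or tear down pod networking until it exists. The sandboxes for coredns, calico-apiserver, calico-kube-controllers, and csi-node-driver therefore fail and are retried. A sketch of that guard as implied by the log text (illustrative, not Calico's actual source):

```python
#!/usr/bin/env python3
# Illustrative sketch of the guard implied by the sandbox errors above:
# the Calico CNI plugin requires /var/lib/calico/nodename, written by the
# calico/node container at startup, before any ADD/DEL can proceed.
from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")

def require_nodename() -> str:
    try:
        return NODENAME_FILE.read_text().strip()
    except FileNotFoundError:
        # Mirror the message seen in the log.
        raise RuntimeError(
            "stat /var/lib/calico/nodename: no such file or directory: "
            "check that the calico/node container is running and has "
            "mounted /var/lib/calico/"
        )

if __name__ == "__main__":
    print("node name:", require_nodename())
```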
Feb 14 01:02:08.601970 containerd[1514]: time="2025-02-14T01:02:08.601867260Z" level=error msg="Failed to destroy network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.602573 containerd[1514]: time="2025-02-14T01:02:08.602482829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt8pt,Uid:d3d8e6bb-a729-4387-87b3-0f4e4f6643d0,Namespace:calico-system,Attempt:0,}" Feb 14 01:02:08.603028 containerd[1514]: time="2025-02-14T01:02:08.602989484Z" level=error msg="encountered an error cleaning up failed sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.603299 containerd[1514]: time="2025-02-14T01:02:08.603178219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6646bd4b95-9q284,Uid:73cf4496-4287-424e-9aa7-d12b46f31c22,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.603912 kubelet[2722]: E0214 01:02:08.603542 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.603912 kubelet[2722]: E0214 01:02:08.603629 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" Feb 14 01:02:08.603912 kubelet[2722]: E0214 01:02:08.603659 2722 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" Feb 14 01:02:08.604111 kubelet[2722]: E0214 01:02:08.603702 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6646bd4b95-9q284_calico-system(73cf4496-4287-424e-9aa7-d12b46f31c22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6646bd4b95-9q284_calico-system(73cf4496-4287-424e-9aa7-d12b46f31c22)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" podUID="73cf4496-4287-424e-9aa7-d12b46f31c22" Feb 14 01:02:08.604594 containerd[1514]: time="2025-02-14T01:02:08.604274561Z" level=error msg="Failed to destroy network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.605063 containerd[1514]: time="2025-02-14T01:02:08.604960966Z" level=error msg="encountered an error cleaning up failed sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.605063 containerd[1514]: time="2025-02-14T01:02:08.605013822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhl98,Uid:2d4aa1d2-3ade-47c1-87a9-67c8a86c7457,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.606397 kubelet[2722]: E0214 01:02:08.605398 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.606397 kubelet[2722]: E0214 01:02:08.605454 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rhl98" Feb 14 01:02:08.606397 kubelet[2722]: E0214 01:02:08.605480 2722 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rhl98" Feb 14 01:02:08.606818 kubelet[2722]: E0214 01:02:08.605524 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rhl98_kube-system(2d4aa1d2-3ade-47c1-87a9-67c8a86c7457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-rhl98_kube-system(2d4aa1d2-3ade-47c1-87a9-67c8a86c7457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rhl98" podUID="2d4aa1d2-3ade-47c1-87a9-67c8a86c7457" Feb 14 01:02:08.611346 containerd[1514]: time="2025-02-14T01:02:08.611305977Z" level=error msg="Failed to destroy network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.611903 containerd[1514]: time="2025-02-14T01:02:08.611817862Z" level=error msg="encountered an error cleaning up failed sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.612012 containerd[1514]: time="2025-02-14T01:02:08.611906099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-mft6l,Uid:320253fc-8ef3-4370-b081-23e4e4fce4d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.613072 kubelet[2722]: E0214 01:02:08.612472 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.613072 kubelet[2722]: E0214 01:02:08.612958 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" Feb 14 01:02:08.613072 kubelet[2722]: E0214 01:02:08.613032 2722 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" Feb 14 01:02:08.613943 kubelet[2722]: E0214 01:02:08.613605 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-78cd759b77-mft6l_calico-apiserver(320253fc-8ef3-4370-b081-23e4e4fce4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78cd759b77-mft6l_calico-apiserver(320253fc-8ef3-4370-b081-23e4e4fce4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" podUID="320253fc-8ef3-4370-b081-23e4e4fce4d9" Feb 14 01:02:08.701883 containerd[1514]: time="2025-02-14T01:02:08.701673814Z" level=error msg="Failed to destroy network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.702864 containerd[1514]: time="2025-02-14T01:02:08.702653066Z" level=error msg="encountered an error cleaning up failed sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.702864 containerd[1514]: time="2025-02-14T01:02:08.702750214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt8pt,Uid:d3d8e6bb-a729-4387-87b3-0f4e4f6643d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.703708 kubelet[2722]: E0214 01:02:08.703414 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:08.703708 kubelet[2722]: E0214 01:02:08.703489 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:02:08.703708 kubelet[2722]: E0214 01:02:08.703521 2722 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gt8pt" Feb 14 01:02:08.704957 kubelet[2722]: E0214 01:02:08.703631 2722 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gt8pt_calico-system(d3d8e6bb-a729-4387-87b3-0f4e4f6643d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gt8pt_calico-system(d3d8e6bb-a729-4387-87b3-0f4e4f6643d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:02:08.918794 kubelet[2722]: I0214 01:02:08.918620 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:08.922323 kubelet[2722]: I0214 01:02:08.922157 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:08.928036 containerd[1514]: time="2025-02-14T01:02:08.927780825Z" level=info msg="StopPodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\"" Feb 14 01:02:08.931205 containerd[1514]: time="2025-02-14T01:02:08.928752646Z" level=info msg="StopPodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\"" Feb 14 01:02:08.939779 containerd[1514]: time="2025-02-14T01:02:08.937538835Z" level=info msg="Ensure that sandbox 18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6 in task-service has been cleanup successfully" Feb 14 01:02:08.941812 containerd[1514]: time="2025-02-14T01:02:08.941728907Z" level=info msg="Ensure that sandbox 15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f in task-service has been cleanup successfully" Feb 14 01:02:08.948324 containerd[1514]: time="2025-02-14T01:02:08.947138895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 14 01:02:08.950402 kubelet[2722]: I0214 01:02:08.950262 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:08.953965 containerd[1514]: time="2025-02-14T01:02:08.953523731Z" level=info msg="StopPodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\"" Feb 14 01:02:08.953965 containerd[1514]: time="2025-02-14T01:02:08.953750232Z" level=info msg="Ensure that sandbox 3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544 in task-service has been cleanup successfully" Feb 14 01:02:08.961573 kubelet[2722]: I0214 01:02:08.960237 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:08.962233 containerd[1514]: time="2025-02-14T01:02:08.962195860Z" level=info msg="StopPodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\"" Feb 14 01:02:08.963595 containerd[1514]: time="2025-02-14T01:02:08.963563483Z" level=info msg="Ensure that sandbox 2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679 in task-service has been cleanup successfully" Feb 14 01:02:08.969949 kubelet[2722]: I0214 01:02:08.969909 2722 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:08.974565 containerd[1514]: time="2025-02-14T01:02:08.974065811Z" level=info msg="StopPodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\"" Feb 14 01:02:08.974565 containerd[1514]: time="2025-02-14T01:02:08.974287430Z" level=info msg="Ensure that sandbox 34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab in task-service has been cleanup successfully" Feb 14 01:02:08.983789 kubelet[2722]: I0214 01:02:08.983727 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:08.990176 containerd[1514]: time="2025-02-14T01:02:08.990113304Z" level=info msg="StopPodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\"" Feb 14 01:02:08.990394 containerd[1514]: time="2025-02-14T01:02:08.990346038Z" level=info msg="Ensure that sandbox 515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e in task-service has been cleanup successfully" Feb 14 01:02:09.084475 containerd[1514]: time="2025-02-14T01:02:09.083924750Z" level=error msg="StopPodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" failed" error="failed to destroy network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:09.084708 kubelet[2722]: E0214 01:02:09.084509 2722 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:09.084708 kubelet[2722]: E0214 01:02:09.084609 2722 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f"} Feb 14 01:02:09.084708 kubelet[2722]: E0214 01:02:09.084687 2722 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73cf4496-4287-424e-9aa7-d12b46f31c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:02:09.084954 kubelet[2722]: E0214 01:02:09.084721 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73cf4496-4287-424e-9aa7-d12b46f31c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" 
podUID="73cf4496-4287-424e-9aa7-d12b46f31c22" Feb 14 01:02:09.108585 containerd[1514]: time="2025-02-14T01:02:09.108028996Z" level=error msg="StopPodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" failed" error="failed to destroy network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:09.108585 containerd[1514]: time="2025-02-14T01:02:09.108207840Z" level=error msg="StopPodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" failed" error="failed to destroy network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:09.109679 kubelet[2722]: E0214 01:02:09.108615 2722 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:09.109679 kubelet[2722]: E0214 01:02:09.108700 2722 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679"} Feb 14 01:02:09.109679 kubelet[2722]: E0214 01:02:09.108897 2722 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:09.109679 kubelet[2722]: E0214 01:02:09.108938 2722 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab"} Feb 14 01:02:09.109679 kubelet[2722]: E0214 01:02:09.108974 2722 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"320253fc-8ef3-4370-b081-23e4e4fce4d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:02:09.110033 kubelet[2722]: E0214 01:02:09.109015 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"320253fc-8ef3-4370-b081-23e4e4fce4d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" podUID="320253fc-8ef3-4370-b081-23e4e4fce4d9" Feb 14 01:02:09.110033 kubelet[2722]: E0214 01:02:09.108917 2722 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:02:09.110033 kubelet[2722]: E0214 01:02:09.109075 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rhl98" podUID="2d4aa1d2-3ade-47c1-87a9-67c8a86c7457" Feb 14 01:02:09.111572 containerd[1514]: time="2025-02-14T01:02:09.111348087Z" level=error msg="StopPodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" failed" error="failed to destroy network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:09.111661 kubelet[2722]: E0214 01:02:09.111595 2722 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:09.111661 kubelet[2722]: E0214 01:02:09.111637 2722 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6"} Feb 14 01:02:09.111765 kubelet[2722]: E0214 01:02:09.111672 2722 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a76acc35-8f62-4f71-9d2c-67c0a127253c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:02:09.111765 kubelet[2722]: E0214 01:02:09.111703 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a76acc35-8f62-4f71-9d2c-67c0a127253c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" podUID="a76acc35-8f62-4f71-9d2c-67c0a127253c" Feb 14 01:02:09.118776 containerd[1514]: time="2025-02-14T01:02:09.118632726Z" level=error msg="StopPodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" failed" error="failed to destroy network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:09.119562 kubelet[2722]: E0214 01:02:09.119286 2722 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:09.119562 kubelet[2722]: E0214 01:02:09.119369 2722 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544"} Feb 14 01:02:09.119562 kubelet[2722]: E0214 01:02:09.119431 2722 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:02:09.119562 kubelet[2722]: E0214 01:02:09.119465 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gt8pt" podUID="d3d8e6bb-a729-4387-87b3-0f4e4f6643d0" Feb 14 01:02:09.125878 containerd[1514]: time="2025-02-14T01:02:09.125781214Z" level=error msg="StopPodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" failed" error="failed to destroy network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:02:09.126027 kubelet[2722]: E0214 01:02:09.125986 2722 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:09.126144 kubelet[2722]: E0214 01:02:09.126035 2722 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e"} Feb 14 01:02:09.126144 kubelet[2722]: E0214 01:02:09.126072 2722 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72fd21e3-08ab-4994-bc45-716e2988248d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:02:09.126144 kubelet[2722]: E0214 01:02:09.126105 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72fd21e3-08ab-4994-bc45-716e2988248d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pds58" podUID="72fd21e3-08ab-4994-bc45-716e2988248d" Feb 14 01:02:09.139856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e-shm.mount: Deactivated successfully. Feb 14 01:02:18.487766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030267293.mount: Deactivated successfully. 
Feb 14 01:02:18.608050 containerd[1514]: time="2025-02-14T01:02:18.597098739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 14 01:02:18.618666 containerd[1514]: time="2025-02-14T01:02:18.617829866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.662357654s" Feb 14 01:02:18.618666 containerd[1514]: time="2025-02-14T01:02:18.617898785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 14 01:02:18.648491 containerd[1514]: time="2025-02-14T01:02:18.648295017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:18.670521 containerd[1514]: time="2025-02-14T01:02:18.669933809Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:18.670905 containerd[1514]: time="2025-02-14T01:02:18.670857142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:18.729257 containerd[1514]: time="2025-02-14T01:02:18.729106536Z" level=info msg="CreateContainer within sandbox \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 14 01:02:18.799465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149742018.mount: Deactivated successfully. Feb 14 01:02:18.805773 containerd[1514]: time="2025-02-14T01:02:18.805713891Z" level=info msg="CreateContainer within sandbox \"16a88cecaf2077c6b87400eef4180d813822542cce3c488e5f1c3e52db26fa70\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"76e50f992e7628bac0b6b812b4d6f07e42d3e57711bf7718d059fe5ac47ed63c\"" Feb 14 01:02:18.810513 containerd[1514]: time="2025-02-14T01:02:18.810458783Z" level=info msg="StartContainer for \"76e50f992e7628bac0b6b812b4d6f07e42d3e57711bf7718d059fe5ac47ed63c\"" Feb 14 01:02:19.047171 systemd[1]: Started cri-containerd-76e50f992e7628bac0b6b812b4d6f07e42d3e57711bf7718d059fe5ac47ed63c.scope - libcontainer container 76e50f992e7628bac0b6b812b4d6f07e42d3e57711bf7718d059fe5ac47ed63c. Feb 14 01:02:19.120676 containerd[1514]: time="2025-02-14T01:02:19.120260307Z" level=info msg="StartContainer for \"76e50f992e7628bac0b6b812b4d6f07e42d3e57711bf7718d059fe5ac47ed63c\" returns successfully" Feb 14 01:02:19.262910 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 14 01:02:19.265285 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
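
The failures above are one fault echoing: the Calico CNI binary reads this node's name from /var/lib/calico/nodename, and that file is only written by the calico/node container, which at that point was still being pulled. Until the file exists, every CNI ADD and DEL fails at the stat, kubelet parks each affected pod in CreatePodSandboxError or KillPodSandboxError, and backoff retries produce the near-identical blocks for calico-kube-controllers, both coredns pods, both calico-apiserver pods, and csi-node-driver. The pull that clears the jam moves 142,742,010 bytes in about 9.66 s, roughly 14 MiB/s. A minimal sketch of the gate, assuming only the standard library (the path and error hint come from the log; the function itself is illustrative, not Calico's source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is written by calico/node once it is running; until then
    // every CNI ADD/DEL on this host fails fast. This reconstructs the
    // behaviour visible in the log, not the actual Calico code.
    const nodenameFile = "/var/lib/calico/nodename"

    func detectNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // Surface the same hint the log shows rather than silently
            // falling back to the OS hostname.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := detectNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(name)
    }

The WireGuard kernel lines directly above are consistent with calico-node probing the kernel at startup for its optional WireGuard encryption backend.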
Feb 14 01:02:20.162567 kubelet[2722]: I0214 01:02:20.154129 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mcs5v" podStartSLOduration=2.622574708 podStartE2EDuration="28.13742664s" podCreationTimestamp="2025-02-14 01:01:52 +0000 UTC" firstStartedPulling="2025-02-14 01:01:53.104734148 +0000 UTC m=+13.819457492" lastFinishedPulling="2025-02-14 01:02:18.619586079 +0000 UTC m=+39.334309424" observedRunningTime="2025-02-14 01:02:20.130304362 +0000 UTC m=+40.845027722" watchObservedRunningTime="2025-02-14 01:02:20.13742664 +0000 UTC m=+40.852149990" Feb 14 01:02:20.585865 containerd[1514]: time="2025-02-14T01:02:20.585481403Z" level=info msg="StopPodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\"" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.678 [INFO][3929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.680 [INFO][3929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" iface="eth0" netns="/var/run/netns/cni-0711f17f-e9d2-ae6f-5cc6-67fb6da2e23d" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.681 [INFO][3929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" iface="eth0" netns="/var/run/netns/cni-0711f17f-e9d2-ae6f-5cc6-67fb6da2e23d" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.683 [INFO][3929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" iface="eth0" netns="/var/run/netns/cni-0711f17f-e9d2-ae6f-5cc6-67fb6da2e23d" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.683 [INFO][3929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.683 [INFO][3929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.988 [INFO][3935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.992 [INFO][3935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:20.992 [INFO][3935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:21.015 [WARNING][3935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:21.015 [INFO][3935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:21.019 [INFO][3935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:21.028763 containerd[1514]: 2025-02-14 01:02:21.026 [INFO][3929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:21.032147 containerd[1514]: time="2025-02-14T01:02:21.029731849Z" level=info msg="TearDown network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" successfully" Feb 14 01:02:21.032147 containerd[1514]: time="2025-02-14T01:02:21.029770384Z" level=info msg="StopPodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" returns successfully" Feb 14 01:02:21.033333 containerd[1514]: time="2025-02-14T01:02:21.033289302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt8pt,Uid:d3d8e6bb-a729-4387-87b3-0f4e4f6643d0,Namespace:calico-system,Attempt:1,}" Feb 14 01:02:21.036854 systemd[1]: run-netns-cni\x2d0711f17f\x2de9d2\x2dae6f\x2d5cc6\x2d67fb6da2e23d.mount: Deactivated successfully. Feb 14 01:02:21.557927 systemd-networkd[1417]: cali8f23cc04da3: Link UP Feb 14 01:02:21.558916 systemd-networkd[1417]: cali8f23cc04da3: Gained carrier Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.318 [INFO][4040] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.342 [INFO][4040] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0 csi-node-driver- calico-system d3d8e6bb-a729-4387-87b3-0f4e4f6643d0 809 0 2025-02-14 01:01:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-jzpa0.gb1.brightbox.com csi-node-driver-gt8pt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f23cc04da3 [] []}} ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.342 [INFO][4040] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.463 [INFO][4067] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" HandleID="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.481 [INFO][4067] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" HandleID="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000409090), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jzpa0.gb1.brightbox.com", "pod":"csi-node-driver-gt8pt", "timestamp":"2025-02-14 01:02:21.463303156 +0000 UTC"}, Hostname:"srv-jzpa0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.481 [INFO][4067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.482 [INFO][4067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.482 [INFO][4067] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jzpa0.gb1.brightbox.com' Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.486 [INFO][4067] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.494 [INFO][4067] ipam/ipam.go 372: Looking up existing affinities for host host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.503 [INFO][4067] ipam/ipam.go 489: Trying affinity for 192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.506 [INFO][4067] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.509 [INFO][4067] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.510 [INFO][4067] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.64/26 handle="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.514 [INFO][4067] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759 Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.519 [INFO][4067] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.64/26 handle="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.528 [INFO][4067] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.65/26] block=192.168.53.64/26 
handle="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.528 [INFO][4067] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.65/26] handle="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.529 [INFO][4067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:21.586643 containerd[1514]: 2025-02-14 01:02:21.529 [INFO][4067] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.65/26] IPv6=[] ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" HandleID="k8s-pod-network.f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.602580 containerd[1514]: 2025-02-14 01:02:21.533 [INFO][4040] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-gt8pt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f23cc04da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:21.602580 containerd[1514]: 2025-02-14 01:02:21.533 [INFO][4040] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.65/32] ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.602580 containerd[1514]: 2025-02-14 01:02:21.533 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f23cc04da3 ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.602580 containerd[1514]: 2025-02-14 
01:02:21.554 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.602580 containerd[1514]: 2025-02-14 01:02:21.558 [INFO][4040] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759", Pod:"csi-node-driver-gt8pt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f23cc04da3", MAC:"0e:ae:1e:7e:51:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:21.602580 containerd[1514]: 2025-02-14 01:02:21.573 [INFO][4040] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759" Namespace="calico-system" Pod="csi-node-driver-gt8pt" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:21.614156 containerd[1514]: time="2025-02-14T01:02:21.611497528Z" level=info msg="StopPodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\"" Feb 14 01:02:21.681215 kernel: bpftool[4130]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 14 01:02:21.761307 containerd[1514]: time="2025-02-14T01:02:21.760784139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:02:21.761307 containerd[1514]: time="2025-02-14T01:02:21.760906379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:02:21.761307 containerd[1514]: time="2025-02-14T01:02:21.760936570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:21.761307 containerd[1514]: time="2025-02-14T01:02:21.761084268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:21.829738 systemd[1]: Started cri-containerd-f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759.scope - libcontainer container f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759. Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.780 [INFO][4119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.786 [INFO][4119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" iface="eth0" netns="/var/run/netns/cni-c1b9a377-e77e-4fcf-ac75-4e612e1dcf6c" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.788 [INFO][4119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" iface="eth0" netns="/var/run/netns/cni-c1b9a377-e77e-4fcf-ac75-4e612e1dcf6c" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.790 [INFO][4119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" iface="eth0" netns="/var/run/netns/cni-c1b9a377-e77e-4fcf-ac75-4e612e1dcf6c" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.792 [INFO][4119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.792 [INFO][4119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.861 [INFO][4152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.861 [INFO][4152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.861 [INFO][4152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.872 [WARNING][4152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.872 [INFO][4152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.874 [INFO][4152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:21.882264 containerd[1514]: 2025-02-14 01:02:21.877 [INFO][4119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:21.884311 containerd[1514]: time="2025-02-14T01:02:21.883662306Z" level=info msg="TearDown network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" successfully" Feb 14 01:02:21.884311 containerd[1514]: time="2025-02-14T01:02:21.883829748Z" level=info msg="StopPodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" returns successfully" Feb 14 01:02:21.886044 containerd[1514]: time="2025-02-14T01:02:21.885912918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhl98,Uid:2d4aa1d2-3ade-47c1-87a9-67c8a86c7457,Namespace:kube-system,Attempt:1,}" Feb 14 01:02:21.963983 containerd[1514]: time="2025-02-14T01:02:21.963911166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt8pt,Uid:d3d8e6bb-a729-4387-87b3-0f4e4f6643d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759\"" Feb 14 01:02:21.969132 containerd[1514]: time="2025-02-14T01:02:21.968793210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 14 01:02:22.038809 systemd[1]: run-netns-cni\x2dc1b9a377\x2de77e\x2d4fcf\x2dac75\x2d4e612e1dcf6c.mount: Deactivated successfully. 
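
The WARNING in both teardowns ("Asked to release address but it doesn't exist. Ignoring") is expected here: these sandboxes failed during ADD before any IP was ever assigned, and a CNI DEL is required to be idempotent, so releasing a handle that owns nothing succeeds with a warning instead of failing the StopPodSandbox. A toy illustration of that contract, with an in-memory map standing in for Calico's datastore (all names invented):

    package main

    import "log"

    // ipam stands in for Calico's datastore: handle ID -> addresses owned.
    type ipam struct {
        byHandle map[string][]string
    }

    // releaseByHandle frees whatever the handle owns. A DEL that follows a
    // failed ADD (or a repeated DEL) finds nothing and must still succeed.
    func (p *ipam) releaseByHandle(handleID string) {
        addrs, ok := p.byHandle[handleID]
        if !ok {
            log.Printf("Asked to release address but it doesn't exist. Ignoring (handle=%q)", handleID)
            return
        }
        delete(p.byHandle, handleID)
        log.Printf("released %v for handle %q", addrs, handleID)
    }

    func main() {
        p := &ipam{byHandle: map[string][]string{}}
        // As in the log: a sandbox that failed during ADD owns no address.
        p.releaseByHandle("k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679")
    }

With the stale sandboxes finally torn down, both pods are recreated as Attempt:1 and the ADDs that follow succeed.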
Feb 14 01:02:22.209184 systemd-networkd[1417]: cali0016ed9e592: Link UP Feb 14 01:02:22.211347 systemd-networkd[1417]: cali0016ed9e592: Gained carrier Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.008 [INFO][4183] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0 coredns-668d6bf9bc- kube-system 2d4aa1d2-3ade-47c1-87a9-67c8a86c7457 818 0 2025-02-14 01:01:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-jzpa0.gb1.brightbox.com coredns-668d6bf9bc-rhl98 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0016ed9e592 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.008 [INFO][4183] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.070 [INFO][4195] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" HandleID="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.085 [INFO][4195] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" HandleID="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3a40), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-jzpa0.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-rhl98", "timestamp":"2025-02-14 01:02:22.070954646 +0000 UTC"}, Hostname:"srv-jzpa0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.087 [INFO][4195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.088 [INFO][4195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
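
Each assignment is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released": Calico serializes IPAM mutations per node so that near-simultaneous ADDs (csi-node-driver at 01:02:21, this coredns pod a second later) cannot both claim the same free address from the shared block. One way to get that behaviour is an exclusive flock on a well-known file; a sketch under that assumption (the lock-file path and the mechanism are illustrative, not necessarily what Calico's ipam plugin does internally):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // lockFile is an assumed path; the point is that one file is shared by
    // every CNI invocation on the host.
    const lockFile = "/var/run/calico/ipam.lock"

    func withHostWideLock(fn func() error) error {
        f, err := os.OpenFile(lockFile, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        // LOCK_EX blocks until every other holder releases, producing the
        // acquire/release bracketing visible in the log.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn()
    }

    func main() {
        if err := withHostWideLock(func() error {
            fmt.Println("assigning addresses under the lock")
            return nil
        }); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }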
Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.088 [INFO][4195] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jzpa0.gb1.brightbox.com' Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.092 [INFO][4195] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.099 [INFO][4195] ipam/ipam.go 372: Looking up existing affinities for host host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.116 [INFO][4195] ipam/ipam.go 489: Trying affinity for 192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.136 [INFO][4195] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.168 [INFO][4195] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.168 [INFO][4195] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.64/26 handle="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.177 [INFO][4195] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.189 [INFO][4195] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.64/26 handle="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.199 [INFO][4195] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.66/26] block=192.168.53.64/26 handle="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.200 [INFO][4195] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.66/26] handle="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.200 [INFO][4195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
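Under the lock, the flow just logged is: confirm the host's affinity for block 192.168.53.64/26, load the block, then claim the first free address in it — here 192.168.53.66. A self-contained sketch of "next free address in a block" using `net/netip`; the `used` map stands in for the block's allocation bitmap, seeded hypothetically so the claim lands on .66 as in the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a CIDR block and returns the first address not in
// used — a stand-in for the per-/26 allocation bitmap the log calls
// "the block". Returns false if the block is exhausted.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.53.64/26")
	// Hypothetical prior state: the block base plus one earlier
	// allocation are taken, so the next claim is .66, matching
	// "Successfully claimed IPs: [192.168.53.66/26]" above.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.53.64"): true,
		netip.MustParseAddr("192.168.53.65"): true,
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.53.66
	}
}
```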
Feb 14 01:02:22.250008 containerd[1514]: 2025-02-14 01:02:22.200 [INFO][4195] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.66/26] IPv6=[] ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" HandleID="k8s-pod-network.c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.252163 containerd[1514]: 2025-02-14 01:02:22.202 [INFO][4183] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-rhl98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0016ed9e592", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:22.252163 containerd[1514]: 2025-02-14 01:02:22.202 [INFO][4183] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.66/32] ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.252163 containerd[1514]: 2025-02-14 01:02:22.202 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0016ed9e592 ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.252163 containerd[1514]: 2025-02-14 01:02:22.212 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" 
WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.252163 containerd[1514]: 2025-02-14 01:02:22.213 [INFO][4183] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc", Pod:"coredns-668d6bf9bc-rhl98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0016ed9e592", MAC:"2a:4f:a3:df:c4:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:22.252163 containerd[1514]: 2025-02-14 01:02:22.241 [INFO][4183] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-rhl98" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:22.294174 containerd[1514]: time="2025-02-14T01:02:22.294057463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:02:22.295601 containerd[1514]: time="2025-02-14T01:02:22.295547547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:02:22.295940 containerd[1514]: time="2025-02-14T01:02:22.295746034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:22.296074 containerd[1514]: time="2025-02-14T01:02:22.295915143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:22.355710 systemd[1]: Started cri-containerd-c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc.scope - libcontainer container c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc. Feb 14 01:02:22.396083 systemd-networkd[1417]: vxlan.calico: Link UP Feb 14 01:02:22.396096 systemd-networkd[1417]: vxlan.calico: Gained carrier Feb 14 01:02:22.536220 containerd[1514]: time="2025-02-14T01:02:22.536162319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhl98,Uid:2d4aa1d2-3ade-47c1-87a9-67c8a86c7457,Namespace:kube-system,Attempt:1,} returns sandbox id \"c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc\"" Feb 14 01:02:22.546102 containerd[1514]: time="2025-02-14T01:02:22.546067376Z" level=info msg="CreateContainer within sandbox \"c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 01:02:22.593778 containerd[1514]: time="2025-02-14T01:02:22.593713006Z" level=info msg="StopPodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\"" Feb 14 01:02:22.614797 containerd[1514]: time="2025-02-14T01:02:22.614743097Z" level=info msg="CreateContainer within sandbox \"c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5379e95e7e794c1c235595e35c463f0166d4e22c3e610ac4f0d37f6e80fb1734\"" Feb 14 01:02:22.617244 containerd[1514]: time="2025-02-14T01:02:22.616236600Z" level=info msg="StartContainer for \"5379e95e7e794c1c235595e35c463f0166d4e22c3e610ac4f0d37f6e80fb1734\"" Feb 14 01:02:22.663580 systemd[1]: Started cri-containerd-5379e95e7e794c1c235595e35c463f0166d4e22c3e610ac4f0d37f6e80fb1734.scope - libcontainer container 5379e95e7e794c1c235595e35c463f0166d4e22c3e610ac4f0d37f6e80fb1734. Feb 14 01:02:22.679827 systemd-networkd[1417]: cali8f23cc04da3: Gained IPv6LL Feb 14 01:02:22.761032 containerd[1514]: time="2025-02-14T01:02:22.760972308Z" level=info msg="StartContainer for \"5379e95e7e794c1c235595e35c463f0166d4e22c3e610ac4f0d37f6e80fb1734\" returns successfully" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.747 [INFO][4330] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.747 [INFO][4330] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" iface="eth0" netns="/var/run/netns/cni-c13ad8ce-6171-1f27-a54d-02fec5da2131" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.748 [INFO][4330] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" iface="eth0" netns="/var/run/netns/cni-c13ad8ce-6171-1f27-a54d-02fec5da2131" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.748 [INFO][4330] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" iface="eth0" netns="/var/run/netns/cni-c13ad8ce-6171-1f27-a54d-02fec5da2131" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.749 [INFO][4330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.749 [INFO][4330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.831 [INFO][4366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.831 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.831 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.842 [WARNING][4366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.842 [INFO][4366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.849 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:22.855859 containerd[1514]: 2025-02-14 01:02:22.851 [INFO][4330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:22.858096 containerd[1514]: time="2025-02-14T01:02:22.856960571Z" level=info msg="TearDown network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" successfully" Feb 14 01:02:22.858096 containerd[1514]: time="2025-02-14T01:02:22.857003424Z" level=info msg="StopPodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" returns successfully" Feb 14 01:02:22.862039 containerd[1514]: time="2025-02-14T01:02:22.860025859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6646bd4b95-9q284,Uid:73cf4496-4287-424e-9aa7-d12b46f31c22,Namespace:calico-system,Attempt:1,}" Feb 14 01:02:23.048082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044119250.mount: Deactivated successfully. Feb 14 01:02:23.050680 systemd[1]: run-netns-cni\x2dc13ad8ce\x2d6171\x2d1f27\x2da54d\x2d02fec5da2131.mount: Deactivated successfully. 
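Note the WARNING in the teardown above: "Asked to release address but it doesn't exist. Ignoring" — yet the StopPodSandbox still returns successfully. CNI DEL has to be idempotent, so a release for a handle with no allocation is treated as already done. A minimal sketch of that behavior, with a map standing in for the allocation store:

```go
package main

import "fmt"

// release models the idempotent release in the log: if the handle has
// no allocation, warn and carry on, so a repeated CNI DEL still ends
// with "Teardown processing complete".
func release(alloc map[string]string, handleID string) {
	if _, ok := alloc[handleID]; !ok {
		fmt.Printf("WARNING: Asked to release address but it doesn't exist. Ignoring handle %q\n", handleID)
		return
	}
	delete(alloc, handleID)
	fmt.Printf("released address for handle %q\n", handleID)
}

func main() {
	alloc := map[string]string{}
	// A DEL for a sandbox whose address is already gone.
	release(alloc, "k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f")
}
```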
Feb 14 01:02:23.284337 systemd-networkd[1417]: cali838c0776288: Link UP Feb 14 01:02:23.287239 systemd-networkd[1417]: cali838c0776288: Gained carrier Feb 14 01:02:23.316655 kubelet[2722]: I0214 01:02:23.316282 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rhl98" podStartSLOduration=38.316098416 podStartE2EDuration="38.316098416s" podCreationTimestamp="2025-02-14 01:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:02:23.132183479 +0000 UTC m=+43.846906858" watchObservedRunningTime="2025-02-14 01:02:23.316098416 +0000 UTC m=+44.030821787" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:22.982 [INFO][4377] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0 calico-kube-controllers-6646bd4b95- calico-system 73cf4496-4287-424e-9aa7-d12b46f31c22 828 0 2025-02-14 01:01:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6646bd4b95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-jzpa0.gb1.brightbox.com calico-kube-controllers-6646bd4b95-9q284 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali838c0776288 [] []}} ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:22.982 [INFO][4377] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.087 [INFO][4391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.214 [INFO][4391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ff660), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jzpa0.gb1.brightbox.com", "pod":"calico-kube-controllers-6646bd4b95-9q284", "timestamp":"2025-02-14 01:02:23.08706743 +0000 UTC"}, Hostname:"srv-jzpa0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:02:23.319700 
containerd[1514]: 2025-02-14 01:02:23.215 [INFO][4391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.215 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.215 [INFO][4391] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jzpa0.gb1.brightbox.com' Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.218 [INFO][4391] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.228 [INFO][4391] ipam/ipam.go 372: Looking up existing affinities for host host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.239 [INFO][4391] ipam/ipam.go 489: Trying affinity for 192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.242 [INFO][4391] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.248 [INFO][4391] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.248 [INFO][4391] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.64/26 handle="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.251 [INFO][4391] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.261 [INFO][4391] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.64/26 handle="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.269 [INFO][4391] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.67/26] block=192.168.53.64/26 handle="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.269 [INFO][4391] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.67/26] handle="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.269 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
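Each claim in this log re-confirms the same affinity, 192.168.53.64/26, and the assigned addresses (.66, .67, then .68 below) all fall in that block: the block containing any address is just the address masked to the block length. A quick check with `netip.Addr.Prefix`:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Addresses claimed on srv-jzpa0.gb1.brightbox.com in this log.
	for _, s := range []string{"192.168.53.66", "192.168.53.67", "192.168.53.68"} {
		a := netip.MustParseAddr(s)
		block, err := a.Prefix(26) // mask to the /26 block length
		if err != nil {
			panic(err)
		}
		// All three land in 192.168.53.64/26 — the block whose
		// affinity the log confirms for this host on every claim.
		fmt.Println(a, "->", block)
	}
}
```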
Feb 14 01:02:23.319700 containerd[1514]: 2025-02-14 01:02:23.270 [INFO][4391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.67/26] IPv6=[] ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:23.322053 containerd[1514]: 2025-02-14 01:02:23.275 [INFO][4377] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0", GenerateName:"calico-kube-controllers-6646bd4b95-", Namespace:"calico-system", SelfLink:"", UID:"73cf4496-4287-424e-9aa7-d12b46f31c22", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6646bd4b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6646bd4b95-9q284", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali838c0776288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:23.322053 containerd[1514]: 2025-02-14 01:02:23.275 [INFO][4377] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.67/32] ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:23.322053 containerd[1514]: 2025-02-14 01:02:23.275 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali838c0776288 ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:23.322053 containerd[1514]: 2025-02-14 01:02:23.283 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 
01:02:23.322053 containerd[1514]: 2025-02-14 01:02:23.288 [INFO][4377] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0", GenerateName:"calico-kube-controllers-6646bd4b95-", Namespace:"calico-system", SelfLink:"", UID:"73cf4496-4287-424e-9aa7-d12b46f31c22", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6646bd4b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f", Pod:"calico-kube-controllers-6646bd4b95-9q284", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali838c0776288", MAC:"ea:8f:92:10:11:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:23.322053 containerd[1514]: 2025-02-14 01:02:23.314 [INFO][4377] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Namespace="calico-system" Pod="calico-kube-controllers-6646bd4b95-9q284" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:23.380921 containerd[1514]: time="2025-02-14T01:02:23.379724668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:02:23.380921 containerd[1514]: time="2025-02-14T01:02:23.379881768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:02:23.380921 containerd[1514]: time="2025-02-14T01:02:23.379909646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:23.380921 containerd[1514]: time="2025-02-14T01:02:23.380070822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:23.384952 systemd-networkd[1417]: cali0016ed9e592: Gained IPv6LL Feb 14 01:02:23.418212 systemd[1]: run-containerd-runc-k8s.io-1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f-runc.4wmIvh.mount: Deactivated successfully. 
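The WorkloadEndpoint names throughout these entries, e.g. `srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0`, double every literal dash in the node and pod names so that a single dash can act as a field separator between node, `k8s`, pod, and interface. A sketch of that construction — inferred purely from the names in this log, not taken from Calico source:

```go
package main

import (
	"fmt"
	"strings"
)

// endpointName reconstructs the WorkloadEndpoint naming visible in
// the log: literal '-' in node and pod names is doubled so a single
// '-' can safely separate the fields. Inferred from the log output.
func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + iface
}

func main() {
	fmt.Println(endpointName("srv-jzpa0.gb1.brightbox.com",
		"calico-kube-controllers-6646bd4b95-9q284", "eth0"))
	// srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0
}
```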
Feb 14 01:02:23.432511 systemd[1]: Started cri-containerd-1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f.scope - libcontainer container 1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f. Feb 14 01:02:23.502962 containerd[1514]: time="2025-02-14T01:02:23.502868992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6646bd4b95-9q284,Uid:73cf4496-4287-424e-9aa7-d12b46f31c22,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\"" Feb 14 01:02:23.581301 containerd[1514]: time="2025-02-14T01:02:23.581153665Z" level=info msg="StopPodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\"" Feb 14 01:02:23.581941 containerd[1514]: time="2025-02-14T01:02:23.581900224Z" level=info msg="StopPodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\"" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.727 [INFO][4515] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.727 [INFO][4515] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" iface="eth0" netns="/var/run/netns/cni-58dbd003-a1b3-96ad-7af8-e6d4e1b2f388" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.728 [INFO][4515] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" iface="eth0" netns="/var/run/netns/cni-58dbd003-a1b3-96ad-7af8-e6d4e1b2f388" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.729 [INFO][4515] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" iface="eth0" netns="/var/run/netns/cni-58dbd003-a1b3-96ad-7af8-e6d4e1b2f388" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.729 [INFO][4515] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.729 [INFO][4515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.796 [INFO][4532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.797 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.797 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.815 [WARNING][4532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.816 [INFO][4532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.819 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:23.829108 containerd[1514]: 2025-02-14 01:02:23.823 [INFO][4515] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:23.834220 containerd[1514]: time="2025-02-14T01:02:23.830657879Z" level=info msg="TearDown network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" successfully" Feb 14 01:02:23.834220 containerd[1514]: time="2025-02-14T01:02:23.830701230Z" level=info msg="StopPodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" returns successfully" Feb 14 01:02:23.834220 containerd[1514]: time="2025-02-14T01:02:23.832762910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pds58,Uid:72fd21e3-08ab-4994-bc45-716e2988248d,Namespace:kube-system,Attempt:1,}" Feb 14 01:02:23.834609 systemd[1]: run-netns-cni\x2d58dbd003\x2da1b3\x2d96ad\x2d7af8\x2de6d4e1b2f388.mount: Deactivated successfully. Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.755 [INFO][4522] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.755 [INFO][4522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" iface="eth0" netns="/var/run/netns/cni-8725f3a1-5780-14fb-2d43-d2362483ac2f" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.756 [INFO][4522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" iface="eth0" netns="/var/run/netns/cni-8725f3a1-5780-14fb-2d43-d2362483ac2f" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.759 [INFO][4522] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" iface="eth0" netns="/var/run/netns/cni-8725f3a1-5780-14fb-2d43-d2362483ac2f" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.759 [INFO][4522] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.759 [INFO][4522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.916 [INFO][4536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.917 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.917 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.927 [WARNING][4536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.927 [INFO][4536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.930 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:23.935602 containerd[1514]: 2025-02-14 01:02:23.932 [INFO][4522] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:23.937123 containerd[1514]: time="2025-02-14T01:02:23.936024282Z" level=info msg="TearDown network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" successfully" Feb 14 01:02:23.937123 containerd[1514]: time="2025-02-14T01:02:23.936061313Z" level=info msg="StopPodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" returns successfully" Feb 14 01:02:23.937123 containerd[1514]: time="2025-02-14T01:02:23.937072948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-mft6l,Uid:320253fc-8ef3-4370-b081-23e4e4fce4d9,Namespace:calico-apiserver,Attempt:1,}" Feb 14 01:02:24.023892 systemd-networkd[1417]: vxlan.calico: Gained IPv6LL Feb 14 01:02:24.040936 systemd[1]: run-netns-cni\x2d8725f3a1\x2d5780\x2d14fb\x2d2d43\x2dd2362483ac2f.mount: Deactivated successfully. 
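Both teardowns above log "Workload's veth was already gone. Nothing to do." — the DEL path checks whether the workload's namespace and device still exist before trying to remove anything. A simplified sketch of that already-gone fast path; the real plugin enters the netns and removes the veth via netlink, which is elided here:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// teardownVeth sketches the check behind "Workload's veth was already
// gone. Nothing to do.": if the netns bind mount no longer exists,
// the DEL is treated as already complete.
func teardownVeth(netnsPath string) error {
	if _, err := os.Stat(netnsPath); errors.Is(err, fs.ErrNotExist) {
		fmt.Println("Workload's veth was already gone. Nothing to do.")
		return nil
	}
	fmt.Println("Entered netns, deleting veth.")
	// ... delete the interface inside netnsPath (netlink, elided) ...
	return nil
}

func main() {
	_ = teardownVeth("/var/run/netns/cni-8725f3a1-5780-14fb-2d43-d2362483ac2f")
}
```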
Feb 14 01:02:24.337992 systemd-networkd[1417]: cali2e620141db9: Link UP Feb 14 01:02:24.343350 systemd-networkd[1417]: cali2e620141db9: Gained carrier Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:23.950 [INFO][4542] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0 coredns-668d6bf9bc- kube-system 72fd21e3-08ab-4994-bc45-716e2988248d 842 0 2025-02-14 01:01:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-jzpa0.gb1.brightbox.com coredns-668d6bf9bc-pds58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e620141db9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:23.951 [INFO][4542] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.122 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" HandleID="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.151 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" HandleID="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ecd70), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-jzpa0.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-pds58", "timestamp":"2025-02-14 01:02:24.122214843 +0000 UTC"}, Hostname:"srv-jzpa0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.152 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.152 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
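The `ipam.AutoAssignArgs` dump just logged shows the shape of each request: a count of v4/v6 addresses, a HandleID of the form `k8s-pod-network.<containerID>` tying the allocation back to the sandbox, and an attribute map recording namespace, node, pod, and timestamp. A small struct modeling those fields — a model of what the log prints, not Calico's exported type:

```go
package main

import (
	"fmt"
	"time"
)

// autoAssignArgs mirrors the fields the log prints for each IPAM
// request; it is a model for illustration, not Calico's own type.
type autoAssignArgs struct {
	Num4, Num6 int
	HandleID   string
	Attrs      map[string]string
}

func newRequest(containerID, namespace, node, pod string) autoAssignArgs {
	return autoAssignArgs{
		Num4: 1,
		// e.g. "k8s-pod-network.4a75e78c3f88..." in the entries above.
		HandleID: "k8s-pod-network." + containerID,
		Attrs: map[string]string{
			"namespace": namespace,
			"node":      node,
			"pod":       pod,
			"timestamp": time.Now().UTC().String(),
		},
	}
}

func main() {
	r := newRequest("4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef",
		"kube-system", "srv-jzpa0.gb1.brightbox.com", "coredns-668d6bf9bc-pds58")
	fmt.Printf("%+v\n", r)
}
```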
Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.152 [INFO][4561] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jzpa0.gb1.brightbox.com' Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.160 [INFO][4561] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.255 [INFO][4561] ipam/ipam.go 372: Looking up existing affinities for host host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.274 [INFO][4561] ipam/ipam.go 489: Trying affinity for 192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.278 [INFO][4561] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.286 [INFO][4561] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.287 [INFO][4561] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.64/26 handle="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.291 [INFO][4561] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.309 [INFO][4561] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.64/26 handle="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.323 [INFO][4561] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.68/26] block=192.168.53.64/26 handle="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.323 [INFO][4561] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.68/26] handle="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.323 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
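The WorkloadEndpoint dumps around these entries print ports as Go hex literals (`Port:0x35`, `Port:0x23c1`). Decoding them confirms they are the standard coredns ports:

```go
package main

import "fmt"

func main() {
	// Port values as printed in the endpoint dumps in this log.
	ports := map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p) // 0x35 = 53, 0x23c1 = 9153
	}
}
```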
Feb 14 01:02:24.385286 containerd[1514]: 2025-02-14 01:02:24.323 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.68/26] IPv6=[] ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" HandleID="k8s-pod-network.4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.388757 containerd[1514]: 2025-02-14 01:02:24.327 [INFO][4542] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"72fd21e3-08ab-4994-bc45-716e2988248d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-pds58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e620141db9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:24.388757 containerd[1514]: 2025-02-14 01:02:24.329 [INFO][4542] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.68/32] ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.388757 containerd[1514]: 2025-02-14 01:02:24.330 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e620141db9 ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.388757 containerd[1514]: 2025-02-14 01:02:24.346 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" 
WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.388757 containerd[1514]: 2025-02-14 01:02:24.353 [INFO][4542] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"72fd21e3-08ab-4994-bc45-716e2988248d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef", Pod:"coredns-668d6bf9bc-pds58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e620141db9", MAC:"0e:29:ad:03:3b:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:24.388757 containerd[1514]: 2025-02-14 01:02:24.382 [INFO][4542] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pds58" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:24.528182 containerd[1514]: time="2025-02-14T01:02:24.527920718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:02:24.528349 containerd[1514]: time="2025-02-14T01:02:24.528224400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:02:24.528812 containerd[1514]: time="2025-02-14T01:02:24.528373041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:24.530908 containerd[1514]: time="2025-02-14T01:02:24.530607224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:24.534221 containerd[1514]: time="2025-02-14T01:02:24.533882158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:24.539046 containerd[1514]: time="2025-02-14T01:02:24.538977412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 14 01:02:24.547984 containerd[1514]: time="2025-02-14T01:02:24.547938696Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:24.560244 containerd[1514]: time="2025-02-14T01:02:24.559048357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:24.563381 containerd[1514]: time="2025-02-14T01:02:24.563315562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.594458191s" Feb 14 01:02:24.563381 containerd[1514]: time="2025-02-14T01:02:24.563361561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 14 01:02:24.566729 containerd[1514]: time="2025-02-14T01:02:24.566574861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 14 01:02:24.568420 systemd-networkd[1417]: caliaa148f3e118: Link UP Feb 14 01:02:24.570012 systemd-networkd[1417]: caliaa148f3e118: Gained carrier Feb 14 01:02:24.576584 containerd[1514]: time="2025-02-14T01:02:24.576525550Z" level=info msg="CreateContainer within sandbox \"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 14 01:02:24.586504 containerd[1514]: time="2025-02-14T01:02:24.585490332Z" level=info msg="StopPodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\"" Feb 14 01:02:24.609733 systemd[1]: Started cri-containerd-4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef.scope - libcontainer container 4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef. 
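The pull of `ghcr.io/flatcar/calico/csi:v3.29.1` above is reported as taking 2.594458191s; as a consistency check, the interval between the PullImage entry (01:02:21.968793210Z) and the Pulled entry (01:02:24.563315562Z) works out to roughly the same value. A quick computation with the standard library; the small residual is expected, since the reported duration is measured inside the pull rather than between log writes:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps of the PullImage and Pulled entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-02-14T01:02:21.968793210Z")
	end, _ := time.Parse(time.RFC3339Nano, "2025-02-14T01:02:24.563315562Z")
	// Close to the reported "in 2.594458191s"; the difference is the
	// time spent emitting the log entries themselves.
	fmt.Println(end.Sub(start)) // 2.594522352s
}
```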
Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.213 [INFO][4572] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0 calico-apiserver-78cd759b77- calico-apiserver 320253fc-8ef3-4370-b081-23e4e4fce4d9 843 0 2025-02-14 01:01:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78cd759b77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-jzpa0.gb1.brightbox.com calico-apiserver-78cd759b77-mft6l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa148f3e118 [] []}} ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.213 [INFO][4572] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.341 [INFO][4582] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" HandleID="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.373 [INFO][4582] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" HandleID="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9b90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-jzpa0.gb1.brightbox.com", "pod":"calico-apiserver-78cd759b77-mft6l", "timestamp":"2025-02-14 01:02:24.341767738 +0000 UTC"}, Hostname:"srv-jzpa0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.374 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.374 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.374 [INFO][4582] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jzpa0.gb1.brightbox.com' Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.379 [INFO][4582] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.470 [INFO][4582] ipam/ipam.go 372: Looking up existing affinities for host host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.486 [INFO][4582] ipam/ipam.go 489: Trying affinity for 192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.498 [INFO][4582] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.508 [INFO][4582] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.508 [INFO][4582] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.64/26 handle="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.513 [INFO][4582] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082 Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.525 [INFO][4582] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.64/26 handle="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.548 [INFO][4582] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.69/26] block=192.168.53.64/26 handle="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.550 [INFO][4582] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.69/26] handle="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.550 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 01:02:24.619630 containerd[1514]: 2025-02-14 01:02:24.551 [INFO][4582] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.69/26] IPv6=[] ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" HandleID="k8s-pod-network.89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.622141 containerd[1514]: 2025-02-14 01:02:24.556 [INFO][4572] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"320253fc-8ef3-4370-b081-23e4e4fce4d9", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-78cd759b77-mft6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa148f3e118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:24.622141 containerd[1514]: 2025-02-14 01:02:24.557 [INFO][4572] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.69/32] ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.622141 containerd[1514]: 2025-02-14 01:02:24.557 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa148f3e118 ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.622141 containerd[1514]: 2025-02-14 01:02:24.572 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.622141 containerd[1514]: 2025-02-14 01:02:24.585 [INFO][4572] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"320253fc-8ef3-4370-b081-23e4e4fce4d9", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082", Pod:"calico-apiserver-78cd759b77-mft6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa148f3e118", MAC:"8e:92:49:34:8e:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:24.622141 containerd[1514]: 2025-02-14 01:02:24.607 [INFO][4572] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-mft6l" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:24.651750 containerd[1514]: time="2025-02-14T01:02:24.651689484Z" level=info msg="CreateContainer within sandbox \"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3514fc8a7cf8370e408e6324486aaecab24152626deb4deec0640c0eb60c6487\"" Feb 14 01:02:24.653382 containerd[1514]: time="2025-02-14T01:02:24.653349042Z" level=info msg="StartContainer for \"3514fc8a7cf8370e408e6324486aaecab24152626deb4deec0640c0eb60c6487\"" Feb 14 01:02:24.753695 containerd[1514]: time="2025-02-14T01:02:24.744812496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:02:24.753695 containerd[1514]: time="2025-02-14T01:02:24.753641797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:02:24.754452 containerd[1514]: time="2025-02-14T01:02:24.753670947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:24.754452 containerd[1514]: time="2025-02-14T01:02:24.753809654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:24.758520 systemd[1]: Started cri-containerd-3514fc8a7cf8370e408e6324486aaecab24152626deb4deec0640c0eb60c6487.scope - libcontainer container 3514fc8a7cf8370e408e6324486aaecab24152626deb4deec0640c0eb60c6487. Feb 14 01:02:24.794686 containerd[1514]: time="2025-02-14T01:02:24.794627455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pds58,Uid:72fd21e3-08ab-4994-bc45-716e2988248d,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef\"" Feb 14 01:02:24.806938 containerd[1514]: time="2025-02-14T01:02:24.806887275Z" level=info msg="CreateContainer within sandbox \"4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 01:02:24.813108 systemd[1]: Started cri-containerd-89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082.scope - libcontainer container 89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082. Feb 14 01:02:24.835186 containerd[1514]: time="2025-02-14T01:02:24.835143274Z" level=info msg="CreateContainer within sandbox \"4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e70094c28020f0e2bc61f8418f031fa7290a12db984a699560661a84ae5a8408\"" Feb 14 01:02:24.837903 containerd[1514]: time="2025-02-14T01:02:24.837721951Z" level=info msg="StartContainer for \"e70094c28020f0e2bc61f8418f031fa7290a12db984a699560661a84ae5a8408\"" Feb 14 01:02:24.912611 containerd[1514]: time="2025-02-14T01:02:24.912385711Z" level=info msg="StartContainer for \"3514fc8a7cf8370e408e6324486aaecab24152626deb4deec0640c0eb60c6487\" returns successfully" Feb 14 01:02:24.919764 systemd-networkd[1417]: cali838c0776288: Gained IPv6LL Feb 14 01:02:24.940733 systemd[1]: Started cri-containerd-e70094c28020f0e2bc61f8418f031fa7290a12db984a699560661a84ae5a8408.scope - libcontainer container e70094c28020f0e2bc61f8418f031fa7290a12db984a699560661a84ae5a8408. Feb 14 01:02:25.018910 containerd[1514]: time="2025-02-14T01:02:25.018856951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-mft6l,Uid:320253fc-8ef3-4370-b081-23e4e4fce4d9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082\"" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.862 [INFO][4650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.868 [INFO][4650] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" iface="eth0" netns="/var/run/netns/cni-5100b00d-2c8d-106f-8cb5-53f4a4c6f433" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.869 [INFO][4650] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" iface="eth0" netns="/var/run/netns/cni-5100b00d-2c8d-106f-8cb5-53f4a4c6f433" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.870 [INFO][4650] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" iface="eth0" netns="/var/run/netns/cni-5100b00d-2c8d-106f-8cb5-53f4a4c6f433" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.870 [INFO][4650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.870 [INFO][4650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.991 [INFO][4740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.992 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:24.992 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:25.019 [WARNING][4740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:25.019 [INFO][4740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:25.023 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:25.045580 containerd[1514]: 2025-02-14 01:02:25.028 [INFO][4650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:25.045580 containerd[1514]: time="2025-02-14T01:02:25.043668344Z" level=info msg="TearDown network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" successfully" Feb 14 01:02:25.045580 containerd[1514]: time="2025-02-14T01:02:25.043708198Z" level=info msg="StopPodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" returns successfully" Feb 14 01:02:25.045580 containerd[1514]: time="2025-02-14T01:02:25.044373147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-szkpk,Uid:a76acc35-8f62-4f71-9d2c-67c0a127253c,Namespace:calico-apiserver,Attempt:1,}" Feb 14 01:02:25.055090 containerd[1514]: time="2025-02-14T01:02:25.054329989Z" level=info msg="StartContainer for \"e70094c28020f0e2bc61f8418f031fa7290a12db984a699560661a84ae5a8408\" returns successfully" Feb 14 01:02:25.058007 systemd[1]: run-netns-cni\x2d5100b00d\x2d2c8d\x2d106f\x2d8cb5\x2d53f4a4c6f433.mount: Deactivated successfully. 
Feb 14 01:02:25.376407 systemd-networkd[1417]: cali921f7fe55ac: Link UP Feb 14 01:02:25.376783 systemd-networkd[1417]: cali921f7fe55ac: Gained carrier Feb 14 01:02:25.407236 kubelet[2722]: I0214 01:02:25.407141 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pds58" podStartSLOduration=40.407104724 podStartE2EDuration="40.407104724s" podCreationTimestamp="2025-02-14 01:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:02:25.187598243 +0000 UTC m=+45.902321609" watchObservedRunningTime="2025-02-14 01:02:25.407104724 +0000 UTC m=+46.121828072" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.202 [INFO][4789] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0 calico-apiserver-78cd759b77- calico-apiserver a76acc35-8f62-4f71-9d2c-67c0a127253c 861 0 2025-02-14 01:01:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78cd759b77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-jzpa0.gb1.brightbox.com calico-apiserver-78cd759b77-szkpk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali921f7fe55ac [] []}} ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.203 [INFO][4789] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.285 [INFO][4803] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" HandleID="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.307 [INFO][4803] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" HandleID="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-jzpa0.gb1.brightbox.com", "pod":"calico-apiserver-78cd759b77-szkpk", "timestamp":"2025-02-14 01:02:25.285339222 +0000 UTC"}, Hostname:"srv-jzpa0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.307 [INFO][4803] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.307 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.308 [INFO][4803] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jzpa0.gb1.brightbox.com' Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.312 [INFO][4803] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.319 [INFO][4803] ipam/ipam.go 372: Looking up existing affinities for host host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.328 [INFO][4803] ipam/ipam.go 489: Trying affinity for 192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.332 [INFO][4803] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.337 [INFO][4803] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.64/26 host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.337 [INFO][4803] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.64/26 handle="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.340 [INFO][4803] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608 Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.347 [INFO][4803] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.64/26 handle="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.365 [INFO][4803] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.70/26] block=192.168.53.64/26 handle="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.365 [INFO][4803] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.70/26] handle="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" host="srv-jzpa0.gb1.brightbox.com" Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.365 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 01:02:25.412912 containerd[1514]: 2025-02-14 01:02:25.365 [INFO][4803] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.70/26] IPv6=[] ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" HandleID="k8s-pod-network.6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.416134 containerd[1514]: 2025-02-14 01:02:25.368 [INFO][4789] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a76acc35-8f62-4f71-9d2c-67c0a127253c", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-78cd759b77-szkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali921f7fe55ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:25.416134 containerd[1514]: 2025-02-14 01:02:25.368 [INFO][4789] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.70/32] ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.416134 containerd[1514]: 2025-02-14 01:02:25.368 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali921f7fe55ac ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.416134 containerd[1514]: 2025-02-14 01:02:25.375 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.416134 containerd[1514]: 2025-02-14 01:02:25.376 [INFO][4789] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a76acc35-8f62-4f71-9d2c-67c0a127253c", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608", Pod:"calico-apiserver-78cd759b77-szkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali921f7fe55ac", MAC:"4a:ff:a5:c5:52:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:25.416134 containerd[1514]: 2025-02-14 01:02:25.408 [INFO][4789] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608" Namespace="calico-apiserver" Pod="calico-apiserver-78cd759b77-szkpk" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:25.431707 systemd-networkd[1417]: cali2e620141db9: Gained IPv6LL Feb 14 01:02:25.458227 containerd[1514]: time="2025-02-14T01:02:25.457968320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:02:25.458227 containerd[1514]: time="2025-02-14T01:02:25.458071732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:02:25.458227 containerd[1514]: time="2025-02-14T01:02:25.458096158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:25.458735 containerd[1514]: time="2025-02-14T01:02:25.458244238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:02:25.488744 systemd[1]: Started cri-containerd-6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608.scope - libcontainer container 6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608. 
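[Annotation] Both apiserver pods drew from the node's affine block 192.168.53.64/26 — 192.168.53.69 for -mft6l above, 192.168.53.70 for -szkpk here. The block arithmetic checks out with the Python standard library:

    import ipaddress

    # The affine block and the two addresses the IPAM traces above assigned.
    block = ipaddress.ip_network("192.168.53.64/26")
    for addr in ("192.168.53.69", "192.168.53.70"):
        assert ipaddress.ip_address(addr) in block

    print(block.num_addresses)       # 64 addresses per /26 block
    print(block[0], "-", block[-1])  # 192.168.53.64 - 192.168.53.127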
Feb 14 01:02:25.551971 containerd[1514]: time="2025-02-14T01:02:25.551884392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cd759b77-szkpk,Uid:a76acc35-8f62-4f71-9d2c-67c0a127253c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608\"" Feb 14 01:02:25.815714 systemd-networkd[1417]: caliaa148f3e118: Gained IPv6LL Feb 14 01:02:27.351971 systemd-networkd[1417]: cali921f7fe55ac: Gained IPv6LL Feb 14 01:02:27.964168 containerd[1514]: time="2025-02-14T01:02:27.962908785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:27.964168 containerd[1514]: time="2025-02-14T01:02:27.964095868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 14 01:02:27.965092 containerd[1514]: time="2025-02-14T01:02:27.965032919Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:27.967935 containerd[1514]: time="2025-02-14T01:02:27.967888429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:27.969048 containerd[1514]: time="2025-02-14T01:02:27.968989803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.402371215s" Feb 14 01:02:27.969128 containerd[1514]: time="2025-02-14T01:02:27.969055828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 14 01:02:27.972395 containerd[1514]: time="2025-02-14T01:02:27.972345052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 14 01:02:27.999163 containerd[1514]: time="2025-02-14T01:02:27.999103403Z" level=info msg="CreateContainer within sandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 14 01:02:28.022111 containerd[1514]: time="2025-02-14T01:02:28.022066328Z" level=info msg="CreateContainer within sandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\"" Feb 14 01:02:28.023840 containerd[1514]: time="2025-02-14T01:02:28.023676940Z" level=info msg="StartContainer for \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\"" Feb 14 01:02:28.071783 systemd[1]: Started cri-containerd-79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b.scope - libcontainer container 79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b. 
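[Annotation] Pull times in this log are Go-style duration strings ("2.594458191s" earlier, "3.402371215s" for kube-controllers just above, "431.664082ms" for a cached pull further on), handy for lining per-image pull rates up. A small converter, assuming only the single-suffix forms seen here rather than Go's full "1m30s" grammar:

    # Converts the single-suffix Go duration strings that appear in this log
    # to seconds. Hypothetical helper; combined units ("1m30s") are not handled.
    def go_duration_seconds(d: str) -> float:
        for suffix, scale in (("ms", 1e-3), ("µs", 1e-6), ("us", 1e-6),
                              ("ns", 1e-9), ("s", 1.0)):
            if d.endswith(suffix):
                return float(d[: -len(suffix)]) * scale
        raise ValueError(f"unrecognized duration: {d!r}")

    print(go_duration_seconds("3.402371215s"))  # kube-controllers pull, above
    print(go_duration_seconds("431.664082ms"))  # cached apiserver pull, later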
Feb 14 01:02:28.140383 containerd[1514]: time="2025-02-14T01:02:28.140015859Z" level=info msg="StartContainer for \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\" returns successfully" Feb 14 01:02:28.188612 kubelet[2722]: I0214 01:02:28.187399 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6646bd4b95-9q284" podStartSLOduration=31.720693543 podStartE2EDuration="36.187368981s" podCreationTimestamp="2025-02-14 01:01:52 +0000 UTC" firstStartedPulling="2025-02-14 01:02:23.504334703 +0000 UTC m=+44.219058046" lastFinishedPulling="2025-02-14 01:02:27.971010148 +0000 UTC m=+48.685733484" observedRunningTime="2025-02-14 01:02:28.185407579 +0000 UTC m=+48.900130934" watchObservedRunningTime="2025-02-14 01:02:28.187368981 +0000 UTC m=+48.902092324" Feb 14 01:02:29.986507 containerd[1514]: time="2025-02-14T01:02:29.986439627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:29.988730 containerd[1514]: time="2025-02-14T01:02:29.988659955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 14 01:02:29.989617 containerd[1514]: time="2025-02-14T01:02:29.989524318Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:30.005179 containerd[1514]: time="2025-02-14T01:02:30.005114337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:30.005777 containerd[1514]: time="2025-02-14T01:02:30.005728270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.033324035s" Feb 14 01:02:30.005877 containerd[1514]: time="2025-02-14T01:02:30.005785290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 14 01:02:30.009336 containerd[1514]: time="2025-02-14T01:02:30.009265653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 01:02:30.013918 containerd[1514]: time="2025-02-14T01:02:30.013843065Z" level=info msg="CreateContainer within sandbox \"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 14 01:02:30.049435 containerd[1514]: time="2025-02-14T01:02:30.049367084Z" level=info msg="CreateContainer within sandbox \"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4d10077222670f4f77453d63a0d1f763005cbfcf94f067069f678d515e1a4701\"" Feb 14 01:02:30.052268 containerd[1514]: time="2025-02-14T01:02:30.051961151Z" level=info msg="StartContainer for \"4d10077222670f4f77453d63a0d1f763005cbfcf94f067069f678d515e1a4701\"" Feb 14 
01:02:30.120090 systemd[1]: run-containerd-runc-k8s.io-4d10077222670f4f77453d63a0d1f763005cbfcf94f067069f678d515e1a4701-runc.JLal0u.mount: Deactivated successfully. Feb 14 01:02:30.129756 systemd[1]: Started cri-containerd-4d10077222670f4f77453d63a0d1f763005cbfcf94f067069f678d515e1a4701.scope - libcontainer container 4d10077222670f4f77453d63a0d1f763005cbfcf94f067069f678d515e1a4701. Feb 14 01:02:30.191255 containerd[1514]: time="2025-02-14T01:02:30.191149600Z" level=info msg="StartContainer for \"4d10077222670f4f77453d63a0d1f763005cbfcf94f067069f678d515e1a4701\" returns successfully" Feb 14 01:02:30.899873 kubelet[2722]: I0214 01:02:30.899790 2722 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 14 01:02:30.899873 kubelet[2722]: I0214 01:02:30.899884 2722 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 14 01:02:31.199734 kubelet[2722]: I0214 01:02:31.198606 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gt8pt" podStartSLOduration=31.157462546 podStartE2EDuration="39.198403849s" podCreationTimestamp="2025-02-14 01:01:52 +0000 UTC" firstStartedPulling="2025-02-14 01:02:21.967998985 +0000 UTC m=+42.682722321" lastFinishedPulling="2025-02-14 01:02:30.008940283 +0000 UTC m=+50.723663624" observedRunningTime="2025-02-14 01:02:31.196697772 +0000 UTC m=+51.911421138" watchObservedRunningTime="2025-02-14 01:02:31.198403849 +0000 UTC m=+51.913127198" Feb 14 01:02:33.945592 containerd[1514]: time="2025-02-14T01:02:33.945498897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:33.947225 containerd[1514]: time="2025-02-14T01:02:33.946911947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 14 01:02:33.948704 containerd[1514]: time="2025-02-14T01:02:33.948231801Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:33.951304 containerd[1514]: time="2025-02-14T01:02:33.951262646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:33.952619 containerd[1514]: time="2025-02-14T01:02:33.952581645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.943259338s" Feb 14 01:02:33.952766 containerd[1514]: time="2025-02-14T01:02:33.952736713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 14 01:02:33.954185 containerd[1514]: time="2025-02-14T01:02:33.954153482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 01:02:33.956807 containerd[1514]: time="2025-02-14T01:02:33.956731868Z" 
level=info msg="CreateContainer within sandbox \"89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 01:02:33.981240 containerd[1514]: time="2025-02-14T01:02:33.981173614Z" level=info msg="CreateContainer within sandbox \"89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"598b6a677dd6216cb315a80b3a7f4abbcc304f7019fdb8508cd9aec6272b292a\"" Feb 14 01:02:33.983063 containerd[1514]: time="2025-02-14T01:02:33.983007920Z" level=info msg="StartContainer for \"598b6a677dd6216cb315a80b3a7f4abbcc304f7019fdb8508cd9aec6272b292a\"" Feb 14 01:02:34.047897 systemd[1]: Started cri-containerd-598b6a677dd6216cb315a80b3a7f4abbcc304f7019fdb8508cd9aec6272b292a.scope - libcontainer container 598b6a677dd6216cb315a80b3a7f4abbcc304f7019fdb8508cd9aec6272b292a. Feb 14 01:02:34.121299 containerd[1514]: time="2025-02-14T01:02:34.121232995Z" level=info msg="StartContainer for \"598b6a677dd6216cb315a80b3a7f4abbcc304f7019fdb8508cd9aec6272b292a\" returns successfully" Feb 14 01:02:34.213510 kubelet[2722]: I0214 01:02:34.213299 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78cd759b77-mft6l" podStartSLOduration=32.283919598 podStartE2EDuration="41.213270505s" podCreationTimestamp="2025-02-14 01:01:53 +0000 UTC" firstStartedPulling="2025-02-14 01:02:25.024561233 +0000 UTC m=+45.739284580" lastFinishedPulling="2025-02-14 01:02:33.95391214 +0000 UTC m=+54.668635487" observedRunningTime="2025-02-14 01:02:34.211855369 +0000 UTC m=+54.926578723" watchObservedRunningTime="2025-02-14 01:02:34.213270505 +0000 UTC m=+54.927993857" Feb 14 01:02:34.381360 containerd[1514]: time="2025-02-14T01:02:34.379693762Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:02:34.383244 containerd[1514]: time="2025-02-14T01:02:34.382798773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 14 01:02:34.386400 containerd[1514]: time="2025-02-14T01:02:34.386278494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 431.664082ms" Feb 14 01:02:34.386400 containerd[1514]: time="2025-02-14T01:02:34.386342975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 14 01:02:34.394283 containerd[1514]: time="2025-02-14T01:02:34.394080114Z" level=info msg="CreateContainer within sandbox \"6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 01:02:34.426323 containerd[1514]: time="2025-02-14T01:02:34.425898813Z" level=info msg="CreateContainer within sandbox \"6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90043b991cba7bd81223803f8997cbd3c109c508e2b14e6324003894a1298448\"" Feb 14 01:02:34.429186 containerd[1514]: 
time="2025-02-14T01:02:34.429123624Z" level=info msg="StartContainer for \"90043b991cba7bd81223803f8997cbd3c109c508e2b14e6324003894a1298448\"" Feb 14 01:02:34.487750 systemd[1]: Started cri-containerd-90043b991cba7bd81223803f8997cbd3c109c508e2b14e6324003894a1298448.scope - libcontainer container 90043b991cba7bd81223803f8997cbd3c109c508e2b14e6324003894a1298448. Feb 14 01:02:34.588064 containerd[1514]: time="2025-02-14T01:02:34.586821198Z" level=info msg="StartContainer for \"90043b991cba7bd81223803f8997cbd3c109c508e2b14e6324003894a1298448\" returns successfully" Feb 14 01:02:35.208113 kubelet[2722]: I0214 01:02:35.207892 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:02:35.865215 kubelet[2722]: I0214 01:02:35.865111 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78cd759b77-szkpk" podStartSLOduration=34.031356146 podStartE2EDuration="42.865074206s" podCreationTimestamp="2025-02-14 01:01:53 +0000 UTC" firstStartedPulling="2025-02-14 01:02:25.554423044 +0000 UTC m=+46.269146386" lastFinishedPulling="2025-02-14 01:02:34.388141104 +0000 UTC m=+55.102864446" observedRunningTime="2025-02-14 01:02:35.238300858 +0000 UTC m=+55.953024214" watchObservedRunningTime="2025-02-14 01:02:35.865074206 +0000 UTC m=+56.579797556" Feb 14 01:02:39.555220 containerd[1514]: time="2025-02-14T01:02:39.554586269Z" level=info msg="StopPodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\"" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.703 [WARNING][5105] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"72fd21e3-08ab-4994-bc45-716e2988248d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef", Pod:"coredns-668d6bf9bc-pds58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e620141db9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.705 [INFO][5105] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.705 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" iface="eth0" netns="" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.705 [INFO][5105] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.706 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.777 [INFO][5113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.778 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.778 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.790 [WARNING][5113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.790 [INFO][5113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.792 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:39.798002 containerd[1514]: 2025-02-14 01:02:39.795 [INFO][5105] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.801904 containerd[1514]: time="2025-02-14T01:02:39.798110028Z" level=info msg="TearDown network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" successfully" Feb 14 01:02:39.801904 containerd[1514]: time="2025-02-14T01:02:39.798154586Z" level=info msg="StopPodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" returns successfully" Feb 14 01:02:39.835483 containerd[1514]: time="2025-02-14T01:02:39.835232643Z" level=info msg="RemovePodSandbox for \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\"" Feb 14 01:02:39.835483 containerd[1514]: time="2025-02-14T01:02:39.835315165Z" level=info msg="Forcibly stopping sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\"" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.903 [WARNING][5131] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"72fd21e3-08ab-4994-bc45-716e2988248d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"4a75e78c3f889b79a21c9101f5095c4c29b4cf5fa200c3e4a3ce3bcdd93b7aef", Pod:"coredns-668d6bf9bc-pds58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e620141db9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.904 [INFO][5131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.904 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" iface="eth0" netns="" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.904 [INFO][5131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.904 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.940 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.941 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.941 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.951 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.951 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" HandleID="k8s-pod-network.515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--pds58-eth0" Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.953 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:39.958844 containerd[1514]: 2025-02-14 01:02:39.956 [INFO][5131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e" Feb 14 01:02:39.960482 containerd[1514]: time="2025-02-14T01:02:39.959059503Z" level=info msg="TearDown network for sandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" successfully" Feb 14 01:02:39.976756 containerd[1514]: time="2025-02-14T01:02:39.976683068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:02:39.989225 containerd[1514]: time="2025-02-14T01:02:39.989174046Z" level=info msg="RemovePodSandbox \"515a343780a5eba909012e1369465a145ad1d1d64c46ce667167cd6327ed067e\" returns successfully" Feb 14 01:02:39.990588 containerd[1514]: time="2025-02-14T01:02:39.990194453Z" level=info msg="StopPodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\"" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.054 [WARNING][5155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759", Pod:"csi-node-driver-gt8pt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f23cc04da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.054 [INFO][5155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.054 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" iface="eth0" netns="" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.054 [INFO][5155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.055 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.114 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.114 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.115 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.130 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.130 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.132 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:40.138830 containerd[1514]: 2025-02-14 01:02:40.135 [INFO][5155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.140513 containerd[1514]: time="2025-02-14T01:02:40.140104453Z" level=info msg="TearDown network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" successfully" Feb 14 01:02:40.140513 containerd[1514]: time="2025-02-14T01:02:40.140173033Z" level=info msg="StopPodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" returns successfully" Feb 14 01:02:40.141317 containerd[1514]: time="2025-02-14T01:02:40.141189972Z" level=info msg="RemovePodSandbox for \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\"" Feb 14 01:02:40.141317 containerd[1514]: time="2025-02-14T01:02:40.141252242Z" level=info msg="Forcibly stopping sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\"" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.204 [WARNING][5182] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3d8e6bb-a729-4387-87b3-0f4e4f6643d0", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"f1d636e8118b01746c0dce2d2ff3843e35f6a6c13bd8c560aa4e31af0ef23759", Pod:"csi-node-driver-gt8pt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f23cc04da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.204 [INFO][5182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.204 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" iface="eth0" netns="" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.204 [INFO][5182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.204 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.255 [INFO][5189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.256 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.256 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.265 [WARNING][5189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.265 [INFO][5189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" HandleID="k8s-pod-network.3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Workload="srv--jzpa0.gb1.brightbox.com-k8s-csi--node--driver--gt8pt-eth0" Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.268 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:40.273336 containerd[1514]: 2025-02-14 01:02:40.270 [INFO][5182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544" Feb 14 01:02:40.274976 containerd[1514]: time="2025-02-14T01:02:40.273347278Z" level=info msg="TearDown network for sandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" successfully" Feb 14 01:02:40.281798 containerd[1514]: time="2025-02-14T01:02:40.281687587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:02:40.281954 containerd[1514]: time="2025-02-14T01:02:40.281849138Z" level=info msg="RemovePodSandbox \"3b6d8b5ec8c2db75f10b4e3d68e6b3fba8c64a0eb4689c0aae0db8a0b589c544\" returns successfully" Feb 14 01:02:40.282892 containerd[1514]: time="2025-02-14T01:02:40.282663827Z" level=info msg="StopPodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\"" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.349 [WARNING][5207] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a76acc35-8f62-4f71-9d2c-67c0a127253c", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608", Pod:"calico-apiserver-78cd759b77-szkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali921f7fe55ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.350 [INFO][5207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.350 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" iface="eth0" netns="" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.350 [INFO][5207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.350 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.389 [INFO][5214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.389 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.389 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.400 [WARNING][5214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.400 [INFO][5214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.402 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:40.408221 containerd[1514]: 2025-02-14 01:02:40.405 [INFO][5207] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.411428 containerd[1514]: time="2025-02-14T01:02:40.408952184Z" level=info msg="TearDown network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" successfully" Feb 14 01:02:40.411428 containerd[1514]: time="2025-02-14T01:02:40.409094526Z" level=info msg="StopPodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" returns successfully" Feb 14 01:02:40.411428 containerd[1514]: time="2025-02-14T01:02:40.410443879Z" level=info msg="RemovePodSandbox for \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\"" Feb 14 01:02:40.411428 containerd[1514]: time="2025-02-14T01:02:40.410487558Z" level=info msg="Forcibly stopping sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\"" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.471 [WARNING][5232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"a76acc35-8f62-4f71-9d2c-67c0a127253c", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"6b06c33287099f12d5c260abafbd4f66793c221a7bcaaecc21ed289b22ea9608", Pod:"calico-apiserver-78cd759b77-szkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali921f7fe55ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.471 [INFO][5232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.471 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" iface="eth0" netns="" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.471 [INFO][5232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.471 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.518 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.519 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.519 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.529 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.529 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" HandleID="k8s-pod-network.18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--szkpk-eth0" Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.531 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:40.536498 containerd[1514]: 2025-02-14 01:02:40.534 [INFO][5232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6" Feb 14 01:02:40.538105 containerd[1514]: time="2025-02-14T01:02:40.536578798Z" level=info msg="TearDown network for sandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" successfully" Feb 14 01:02:40.543135 containerd[1514]: time="2025-02-14T01:02:40.543074813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:02:40.543227 containerd[1514]: time="2025-02-14T01:02:40.543163982Z" level=info msg="RemovePodSandbox \"18d23ba032d84425e6318263b719d27e1e7ab2189bd97f5ec1145c425076afa6\" returns successfully" Feb 14 01:02:40.544255 containerd[1514]: time="2025-02-14T01:02:40.544211498Z" level=info msg="StopPodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\"" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.607 [WARNING][5256] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0", GenerateName:"calico-kube-controllers-6646bd4b95-", Namespace:"calico-system", SelfLink:"", UID:"73cf4496-4287-424e-9aa7-d12b46f31c22", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6646bd4b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f", Pod:"calico-kube-controllers-6646bd4b95-9q284", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali838c0776288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.607 [INFO][5256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.607 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" iface="eth0" netns="" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.607 [INFO][5256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.607 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.643 [INFO][5262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.643 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.643 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.653 [WARNING][5262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.653 [INFO][5262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.655 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:40.660727 containerd[1514]: 2025-02-14 01:02:40.657 [INFO][5256] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.660727 containerd[1514]: time="2025-02-14T01:02:40.660686079Z" level=info msg="TearDown network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" successfully" Feb 14 01:02:40.664140 containerd[1514]: time="2025-02-14T01:02:40.660730672Z" level=info msg="StopPodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" returns successfully" Feb 14 01:02:40.664140 containerd[1514]: time="2025-02-14T01:02:40.662899722Z" level=info msg="RemovePodSandbox for \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\"" Feb 14 01:02:40.664140 containerd[1514]: time="2025-02-14T01:02:40.662976937Z" level=info msg="Forcibly stopping sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\"" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.729 [WARNING][5280] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0", GenerateName:"calico-kube-controllers-6646bd4b95-", Namespace:"calico-system", SelfLink:"", UID:"73cf4496-4287-424e-9aa7-d12b46f31c22", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6646bd4b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f", Pod:"calico-kube-controllers-6646bd4b95-9q284", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali838c0776288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.730 [INFO][5280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.730 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" iface="eth0" netns="" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.730 [INFO][5280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.730 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.765 [INFO][5286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.766 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.766 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.776 [WARNING][5286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.776 [INFO][5286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" HandleID="k8s-pod-network.15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.779 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:40.784712 containerd[1514]: 2025-02-14 01:02:40.781 [INFO][5280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f" Feb 14 01:02:40.785665 containerd[1514]: time="2025-02-14T01:02:40.784848937Z" level=info msg="TearDown network for sandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" successfully" Feb 14 01:02:40.790373 containerd[1514]: time="2025-02-14T01:02:40.790334288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:02:40.790476 containerd[1514]: time="2025-02-14T01:02:40.790423332Z" level=info msg="RemovePodSandbox \"15a3e8082050ffa1c4988cdb2f34fc8b72ac4aa6b2a361910b51286e65d0ad9f\" returns successfully" Feb 14 01:02:40.791295 containerd[1514]: time="2025-02-14T01:02:40.791242433Z" level=info msg="StopPodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\"" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.946 [WARNING][5304] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc", Pod:"coredns-668d6bf9bc-rhl98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0016ed9e592", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.947 [INFO][5304] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.947 [INFO][5304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" iface="eth0" netns="" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.947 [INFO][5304] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.947 [INFO][5304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.991 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.991 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:40.992 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:41.000 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:41.001 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:41.002 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:41.008680 containerd[1514]: 2025-02-14 01:02:41.005 [INFO][5304] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.008680 containerd[1514]: time="2025-02-14T01:02:41.008575142Z" level=info msg="TearDown network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" successfully" Feb 14 01:02:41.008680 containerd[1514]: time="2025-02-14T01:02:41.008624310Z" level=info msg="StopPodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" returns successfully" Feb 14 01:02:41.012380 containerd[1514]: time="2025-02-14T01:02:41.011186371Z" level=info msg="RemovePodSandbox for \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\"" Feb 14 01:02:41.012380 containerd[1514]: time="2025-02-14T01:02:41.011366982Z" level=info msg="Forcibly stopping sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\"" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.083 [WARNING][5328] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2d4aa1d2-3ade-47c1-87a9-67c8a86c7457", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"c336b6079eb32b3032436921d20be571a87240dfe2d782b431fb39d0945430dc", Pod:"coredns-668d6bf9bc-rhl98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0016ed9e592", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.083 [INFO][5328] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.083 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" iface="eth0" netns="" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.084 [INFO][5328] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.084 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.137 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.138 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.138 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.149 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.149 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" HandleID="k8s-pod-network.2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Workload="srv--jzpa0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--rhl98-eth0" Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.151 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:41.156813 containerd[1514]: 2025-02-14 01:02:41.154 [INFO][5328] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679" Feb 14 01:02:41.160068 containerd[1514]: time="2025-02-14T01:02:41.157723674Z" level=info msg="TearDown network for sandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" successfully" Feb 14 01:02:41.165108 containerd[1514]: time="2025-02-14T01:02:41.164843674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:02:41.165108 containerd[1514]: time="2025-02-14T01:02:41.164938045Z" level=info msg="RemovePodSandbox \"2a5cd72d820486ae6fe2845d5abb5daf6ce0e9c02339fb56bb7314397456f679\" returns successfully" Feb 14 01:02:41.167228 containerd[1514]: time="2025-02-14T01:02:41.167140552Z" level=info msg="StopPodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\"" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.243 [WARNING][5352] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"320253fc-8ef3-4370-b081-23e4e4fce4d9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082", Pod:"calico-apiserver-78cd759b77-mft6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa148f3e118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.244 [INFO][5352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.244 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" iface="eth0" netns="" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.244 [INFO][5352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.244 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.305 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.306 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.306 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.316 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.316 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.318 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:41.325214 containerd[1514]: 2025-02-14 01:02:41.321 [INFO][5352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.326331 containerd[1514]: time="2025-02-14T01:02:41.325296834Z" level=info msg="TearDown network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" successfully" Feb 14 01:02:41.326331 containerd[1514]: time="2025-02-14T01:02:41.325342312Z" level=info msg="StopPodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" returns successfully" Feb 14 01:02:41.327313 containerd[1514]: time="2025-02-14T01:02:41.326763757Z" level=info msg="RemovePodSandbox for \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\"" Feb 14 01:02:41.327313 containerd[1514]: time="2025-02-14T01:02:41.326826870Z" level=info msg="Forcibly stopping sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\"" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.409 [WARNING][5377] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0", GenerateName:"calico-apiserver-78cd759b77-", Namespace:"calico-apiserver", SelfLink:"", UID:"320253fc-8ef3-4370-b081-23e4e4fce4d9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cd759b77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jzpa0.gb1.brightbox.com", ContainerID:"89119fc200753d18e22603ae914046f68942bd8a4d2a02314435f0a49b7d0082", Pod:"calico-apiserver-78cd759b77-mft6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa148f3e118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.410 [INFO][5377] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.410 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" iface="eth0" netns="" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.410 [INFO][5377] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.411 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.451 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.451 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.451 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.462 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.462 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" HandleID="k8s-pod-network.34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--apiserver--78cd759b77--mft6l-eth0" Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.466 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:02:41.471313 containerd[1514]: 2025-02-14 01:02:41.468 [INFO][5377] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab" Feb 14 01:02:41.472728 containerd[1514]: time="2025-02-14T01:02:41.471375864Z" level=info msg="TearDown network for sandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" successfully" Feb 14 01:02:41.475775 containerd[1514]: time="2025-02-14T01:02:41.475735959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 01:02:41.475865 containerd[1514]: time="2025-02-14T01:02:41.475840141Z" level=info msg="RemovePodSandbox \"34fd39448310577fe593bd77e8954b2ef9486b4caf34d6a2079f4c10d075a5ab\" returns successfully" Feb 14 01:02:44.440318 systemd[1]: Started sshd@13-10.230.12.186:22-147.75.109.163:50762.service - OpenSSH per-connection server daemon (147.75.109.163:50762). Feb 14 01:02:45.418269 sshd[5400]: Accepted publickey for core from 147.75.109.163 port 50762 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:02:45.421217 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:02:45.430135 systemd-logind[1490]: New session 12 of user core. Feb 14 01:02:45.437708 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 14 01:02:46.604360 sshd[5400]: pam_unix(sshd:session): session closed for user core Feb 14 01:02:46.611898 systemd[1]: sshd@13-10.230.12.186:22-147.75.109.163:50762.service: Deactivated successfully. Feb 14 01:02:46.614406 systemd[1]: session-12.scope: Deactivated successfully. Feb 14 01:02:46.615730 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Feb 14 01:02:46.618079 systemd-logind[1490]: Removed session 12. Feb 14 01:02:51.786840 systemd[1]: Started sshd@14-10.230.12.186:22-147.75.109.163:33274.service - OpenSSH per-connection server daemon (147.75.109.163:33274). Feb 14 01:02:52.724769 sshd[5418]: Accepted publickey for core from 147.75.109.163 port 33274 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 01:02:52.727285 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:02:52.735488 systemd-logind[1490]: New session 13 of user core. Feb 14 01:02:52.738732 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 14 01:02:53.631250 sshd[5418]: pam_unix(sshd:session): session closed for user core Feb 14 01:02:53.639525 systemd-logind[1490]: Session 13 logged out. 
Waiting for processes to exit. Feb 14 01:02:53.641281 systemd[1]: sshd@14-10.230.12.186:22-147.75.109.163:33274.service: Deactivated successfully. Feb 14 01:02:53.647481 systemd[1]: session-13.scope: Deactivated successfully. Feb 14 01:02:53.652404 systemd-logind[1490]: Removed session 13. Feb 14 01:02:53.764809 containerd[1514]: time="2025-02-14T01:02:53.764661989Z" level=info msg="StopContainer for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" with timeout 300 (s)" Feb 14 01:02:53.774808 containerd[1514]: time="2025-02-14T01:02:53.774691338Z" level=info msg="Stop container \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" with signal terminated" Feb 14 01:02:53.943742 containerd[1514]: time="2025-02-14T01:02:53.941794904Z" level=info msg="StopContainer for \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\" with timeout 30 (s)" Feb 14 01:02:53.944236 containerd[1514]: time="2025-02-14T01:02:53.944077559Z" level=info msg="Stop container \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\" with signal terminated" Feb 14 01:02:53.979941 systemd[1]: cri-containerd-79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b.scope: Deactivated successfully. Feb 14 01:02:54.044173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b-rootfs.mount: Deactivated successfully. Feb 14 01:02:54.084243 containerd[1514]: time="2025-02-14T01:02:54.055061203Z" level=info msg="shim disconnected" id=79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b namespace=k8s.io Feb 14 01:02:54.096661 containerd[1514]: time="2025-02-14T01:02:54.095940643Z" level=warning msg="cleaning up after shim disconnected" id=79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b namespace=k8s.io Feb 14 01:02:54.096661 containerd[1514]: time="2025-02-14T01:02:54.095996190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:02:54.131622 containerd[1514]: time="2025-02-14T01:02:54.131308993Z" level=warning msg="cleanup warnings time=\"2025-02-14T01:02:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 14 01:02:54.136741 containerd[1514]: time="2025-02-14T01:02:54.135890796Z" level=info msg="StopContainer for \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\" returns successfully" Feb 14 01:02:54.138295 containerd[1514]: time="2025-02-14T01:02:54.138255973Z" level=info msg="StopPodSandbox for \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\"" Feb 14 01:02:54.140378 containerd[1514]: time="2025-02-14T01:02:54.140311378Z" level=info msg="Container to stop \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 01:02:54.150449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f-shm.mount: Deactivated successfully. Feb 14 01:02:54.160684 systemd[1]: cri-containerd-1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f.scope: Deactivated successfully. 
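The StopContainer records above ("with timeout 300 (s)", "with timeout 30 (s)", then "Stop container ... with signal terminated") reflect the usual CRI graceful-stop contract: the runtime delivers SIGTERM, waits out the per-call grace period, and only then escalates to a hard kill, after which systemd reports the cri-containerd scope as deactivated and the shim disconnects. A compressed sketch of that escalation, using a plain exec.Cmd as a stand-in for containerd's task API:

```go
package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout mirrors the logged flow: SIGTERM first, then a hard kill
// once the grace period ("timeout N (s)") expires.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	cmd.Process.Signal(syscall.SIGTERM) // "Stop container ... with signal terminated"

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		cmd.Process.Kill() // escalate, as the runtime would with SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "600")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithTimeout(cmd, 30*time.Second)
}
```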
Feb 14 01:02:54.210408 containerd[1514]: time="2025-02-14T01:02:54.209972253Z" level=info msg="shim disconnected" id=1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f namespace=k8s.io Feb 14 01:02:54.212288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f-rootfs.mount: Deactivated successfully. Feb 14 01:02:54.215000 containerd[1514]: time="2025-02-14T01:02:54.213706925Z" level=warning msg="cleaning up after shim disconnected" id=1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f namespace=k8s.io Feb 14 01:02:54.215000 containerd[1514]: time="2025-02-14T01:02:54.213736401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:02:54.363133 kubelet[2722]: I0214 01:02:54.363068 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Feb 14 01:02:54.400321 systemd-networkd[1417]: cali838c0776288: Link DOWN Feb 14 01:02:54.400334 systemd-networkd[1417]: cali838c0776288: Lost carrier Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.393 [INFO][5540] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.393 [INFO][5540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" iface="eth0" netns="/var/run/netns/cni-e839c04e-7eaf-c540-bb10-5637a95790c2" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.394 [INFO][5540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" iface="eth0" netns="/var/run/netns/cni-e839c04e-7eaf-c540-bb10-5637a95790c2" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.407 [INFO][5540] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" after=13.618051ms iface="eth0" netns="/var/run/netns/cni-e839c04e-7eaf-c540-bb10-5637a95790c2" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.407 [INFO][5540] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.407 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.457 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0" Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.457 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.457 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
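Unlike the earlier no-op teardowns ("CleanUpNamespace called with no netns name, ignoring"), this sandbox still had a live network namespace, so dataplane_linux.go enters /var/run/netns/cni-e839c04e-... and deletes the workload's eth0; removing one end of the veth pair takes the host-side cali838c0776288 down with it, which is the "Link DOWN / Lost carrier" pair logged by systemd-networkd. A condensed sketch of that device teardown using the vishvananda/netlink and netns packages, with error handling trimmed and the netns path taken from the log:

```go
package main

import (
	"runtime"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

// deleteWorkloadDevice enters the sandbox netns and removes its interface,
// which also tears down the host-side cali* peer of the veth pair.
func deleteWorkloadDevice(nsPath, ifName string) error {
	runtime.LockOSThread() // netns switching is per OS thread
	defer runtime.UnlockOSThread()

	hostNS, err := netns.Get()
	if err != nil {
		return err
	}
	defer hostNS.Close()
	defer netns.Set(hostNS) // always hop back to the host namespace

	podNS, err := netns.GetFromPath(nsPath)
	if err != nil {
		return err
	}
	defer podNS.Close()
	if err := netns.Set(podNS); err != nil {
		return err
	}

	link, err := netlink.LinkByName(ifName)
	if err != nil {
		return err // device already gone: nothing left to delete
	}
	return netlink.LinkDel(link) // triggers "Lost carrier" on the peer
}

func main() {
	deleteWorkloadDevice("/var/run/netns/cni-e839c04e-7eaf-c540-bb10-5637a95790c2", "eth0")
}
```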
Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.515 [INFO][5548] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.516 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.518 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:02:54.526201 containerd[1514]: 2025-02-14 01:02:54.521 [INFO][5540] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:02:54.528278 containerd[1514]: time="2025-02-14T01:02:54.527495832Z" level=info msg="TearDown network for sandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" successfully"
Feb 14 01:02:54.528278 containerd[1514]: time="2025-02-14T01:02:54.527576065Z" level=info msg="StopPodSandbox for \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" returns successfully"
Feb 14 01:02:54.538678 systemd[1]: run-netns-cni\x2de839c04e\x2d7eaf\x2dc540\x2dbb10\x2d5637a95790c2.mount: Deactivated successfully.
Feb 14 01:02:54.635824 kubelet[2722]: I0214 01:02:54.634575 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73cf4496-4287-424e-9aa7-d12b46f31c22-tigera-ca-bundle\") pod \"73cf4496-4287-424e-9aa7-d12b46f31c22\" (UID: \"73cf4496-4287-424e-9aa7-d12b46f31c22\") "
Feb 14 01:02:54.635824 kubelet[2722]: I0214 01:02:54.634661 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6zk6\" (UniqueName: \"kubernetes.io/projected/73cf4496-4287-424e-9aa7-d12b46f31c22-kube-api-access-z6zk6\") pod \"73cf4496-4287-424e-9aa7-d12b46f31c22\" (UID: \"73cf4496-4287-424e-9aa7-d12b46f31c22\") "
Feb 14 01:02:54.675010 systemd[1]: var-lib-kubelet-pods-73cf4496\x2d4287\x2d424e\x2d9aa7\x2dd12b46f31c22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6zk6.mount: Deactivated successfully.
Feb 14 01:02:54.680307 kubelet[2722]: I0214 01:02:54.676448 2722 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cf4496-4287-424e-9aa7-d12b46f31c22-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "73cf4496-4287-424e-9aa7-d12b46f31c22" (UID: "73cf4496-4287-424e-9aa7-d12b46f31c22"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 14 01:02:54.680760 kubelet[2722]: I0214 01:02:54.677432 2722 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cf4496-4287-424e-9aa7-d12b46f31c22-kube-api-access-z6zk6" (OuterVolumeSpecName: "kube-api-access-z6zk6") pod "73cf4496-4287-424e-9aa7-d12b46f31c22" (UID: "73cf4496-4287-424e-9aa7-d12b46f31c22"). InnerVolumeSpecName "kube-api-access-z6zk6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 14 01:02:54.735463 kubelet[2722]: I0214 01:02:54.735366 2722 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73cf4496-4287-424e-9aa7-d12b46f31c22-tigera-ca-bundle\") on node \"srv-jzpa0.gb1.brightbox.com\" DevicePath \"\""
Feb 14 01:02:54.735463 kubelet[2722]: I0214 01:02:54.735411 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6zk6\" (UniqueName: \"kubernetes.io/projected/73cf4496-4287-424e-9aa7-d12b46f31c22-kube-api-access-z6zk6\") on node \"srv-jzpa0.gb1.brightbox.com\" DevicePath \"\""
Feb 14 01:02:55.040589 systemd[1]: var-lib-kubelet-pods-73cf4496\x2d4287\x2d424e\x2d9aa7\x2dd12b46f31c22-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Feb 14 01:02:55.396665 systemd[1]: Removed slice kubepods-besteffort-pod73cf4496_4287_424e_9aa7_d12b46f31c22.slice - libcontainer container kubepods-besteffort-pod73cf4496_4287_424e_9aa7_d12b46f31c22.slice.
Feb 14 01:02:55.589611 kubelet[2722]: I0214 01:02:55.589521 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cf4496-4287-424e-9aa7-d12b46f31c22" path="/var/lib/kubelet/pods/73cf4496-4287-424e-9aa7-d12b46f31c22/volumes"
Feb 14 01:02:58.389298 systemd[1]: cri-containerd-7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0.scope: Deactivated successfully.
Feb 14 01:02:58.425348 containerd[1514]: time="2025-02-14T01:02:58.425135947Z" level=info msg="shim disconnected" id=7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0 namespace=k8s.io
Feb 14 01:02:58.425348 containerd[1514]: time="2025-02-14T01:02:58.425232335Z" level=warning msg="cleaning up after shim disconnected" id=7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0 namespace=k8s.io
Feb 14 01:02:58.425348 containerd[1514]: time="2025-02-14T01:02:58.425268497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 01:02:58.427103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0-rootfs.mount: Deactivated successfully.
Feb 14 01:02:58.459771 containerd[1514]: time="2025-02-14T01:02:58.459652127Z" level=info msg="StopContainer for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" returns successfully"
Feb 14 01:02:58.460622 containerd[1514]: time="2025-02-14T01:02:58.460426556Z" level=info msg="StopPodSandbox for \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\""
Feb 14 01:02:58.460622 containerd[1514]: time="2025-02-14T01:02:58.460478446Z" level=info msg="Container to stop \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 14 01:02:58.465867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c-shm.mount: Deactivated successfully.
Feb 14 01:02:58.474072 systemd[1]: cri-containerd-b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c.scope: Deactivated successfully.
Feb 14 01:02:58.503703 containerd[1514]: time="2025-02-14T01:02:58.503400974Z" level=info msg="shim disconnected" id=b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c namespace=k8s.io
Feb 14 01:02:58.503703 containerd[1514]: time="2025-02-14T01:02:58.503474275Z" level=warning msg="cleaning up after shim disconnected" id=b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c namespace=k8s.io
Feb 14 01:02:58.503703 containerd[1514]: time="2025-02-14T01:02:58.503489995Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 01:02:58.506502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c-rootfs.mount: Deactivated successfully.
Feb 14 01:02:58.532284 containerd[1514]: time="2025-02-14T01:02:58.532095731Z" level=info msg="TearDown network for sandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" successfully"
Feb 14 01:02:58.532284 containerd[1514]: time="2025-02-14T01:02:58.532153712Z" level=info msg="StopPodSandbox for \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" returns successfully"
Feb 14 01:02:58.663335 kubelet[2722]: I0214 01:02:58.663127 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/257e20d1-c896-4af3-9c16-b518e0e806f9-typha-certs\") pod \"257e20d1-c896-4af3-9c16-b518e0e806f9\" (UID: \"257e20d1-c896-4af3-9c16-b518e0e806f9\") "
Feb 14 01:02:58.663335 kubelet[2722]: I0214 01:02:58.663213 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvnt\" (UniqueName: \"kubernetes.io/projected/257e20d1-c896-4af3-9c16-b518e0e806f9-kube-api-access-5lvnt\") pod \"257e20d1-c896-4af3-9c16-b518e0e806f9\" (UID: \"257e20d1-c896-4af3-9c16-b518e0e806f9\") "
Feb 14 01:02:58.663335 kubelet[2722]: I0214 01:02:58.663265 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/257e20d1-c896-4af3-9c16-b518e0e806f9-tigera-ca-bundle\") pod \"257e20d1-c896-4af3-9c16-b518e0e806f9\" (UID: \"257e20d1-c896-4af3-9c16-b518e0e806f9\") "
Feb 14 01:02:58.672662 kubelet[2722]: I0214 01:02:58.671834 2722 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257e20d1-c896-4af3-9c16-b518e0e806f9-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "257e20d1-c896-4af3-9c16-b518e0e806f9" (UID: "257e20d1-c896-4af3-9c16-b518e0e806f9"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 14 01:02:58.673463 systemd[1]: var-lib-kubelet-pods-257e20d1\x2dc896\x2d4af3\x2d9c16\x2db518e0e806f9-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Feb 14 01:02:58.675993 kubelet[2722]: I0214 01:02:58.675908 2722 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257e20d1-c896-4af3-9c16-b518e0e806f9-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "257e20d1-c896-4af3-9c16-b518e0e806f9" (UID: "257e20d1-c896-4af3-9c16-b518e0e806f9"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 14 01:02:58.681123 kubelet[2722]: I0214 01:02:58.681073 2722 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257e20d1-c896-4af3-9c16-b518e0e806f9-kube-api-access-5lvnt" (OuterVolumeSpecName: "kube-api-access-5lvnt") pod "257e20d1-c896-4af3-9c16-b518e0e806f9" (UID: "257e20d1-c896-4af3-9c16-b518e0e806f9"). InnerVolumeSpecName "kube-api-access-5lvnt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 14 01:02:58.682839 systemd[1]: var-lib-kubelet-pods-257e20d1\x2dc896\x2d4af3\x2d9c16\x2db518e0e806f9-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Feb 14 01:02:58.764178 kubelet[2722]: I0214 01:02:58.764063 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lvnt\" (UniqueName: \"kubernetes.io/projected/257e20d1-c896-4af3-9c16-b518e0e806f9-kube-api-access-5lvnt\") on node \"srv-jzpa0.gb1.brightbox.com\" DevicePath \"\""
Feb 14 01:02:58.764178 kubelet[2722]: I0214 01:02:58.764127 2722 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/257e20d1-c896-4af3-9c16-b518e0e806f9-typha-certs\") on node \"srv-jzpa0.gb1.brightbox.com\" DevicePath \"\""
Feb 14 01:02:58.764178 kubelet[2722]: I0214 01:02:58.764145 2722 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/257e20d1-c896-4af3-9c16-b518e0e806f9-tigera-ca-bundle\") on node \"srv-jzpa0.gb1.brightbox.com\" DevicePath \"\""
Feb 14 01:02:58.787978 systemd[1]: Started sshd@15-10.230.12.186:22-147.75.109.163:33278.service - OpenSSH per-connection server daemon (147.75.109.163:33278).
Feb 14 01:02:59.419021 systemd[1]: Removed slice kubepods-besteffort-pod257e20d1_c896_4af3_9c16_b518e0e806f9.slice - libcontainer container kubepods-besteffort-pod257e20d1_c896_4af3_9c16_b518e0e806f9.slice.
Feb 14 01:02:59.424968 kubelet[2722]: I0214 01:02:59.423759 2722 scope.go:117] "RemoveContainer" containerID="7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0"
Feb 14 01:02:59.427423 systemd[1]: var-lib-kubelet-pods-257e20d1\x2dc896\x2d4af3\x2d9c16\x2db518e0e806f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5lvnt.mount: Deactivated successfully.
Feb 14 01:02:59.432805 containerd[1514]: time="2025-02-14T01:02:59.432442252Z" level=info msg="RemoveContainer for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\""
Feb 14 01:02:59.439042 containerd[1514]: time="2025-02-14T01:02:59.438941901Z" level=info msg="RemoveContainer for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" returns successfully"
Feb 14 01:02:59.439326 kubelet[2722]: I0214 01:02:59.439196 2722 scope.go:117] "RemoveContainer" containerID="7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0"
Feb 14 01:02:59.465053 containerd[1514]: time="2025-02-14T01:02:59.454509197Z" level=error msg="ContainerStatus for \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\": not found"
Feb 14 01:02:59.481823 kubelet[2722]: E0214 01:02:59.481769 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\": not found" containerID="7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0"
Feb 14 01:02:59.482535 kubelet[2722]: I0214 01:02:59.482063 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0"} err="failed to get container status \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0\": not found"
Feb 14 01:02:59.583802 kubelet[2722]: I0214 01:02:59.583514 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257e20d1-c896-4af3-9c16-b518e0e806f9" path="/var/lib/kubelet/pods/257e20d1-c896-4af3-9c16-b518e0e806f9/volumes"
Feb 14 01:02:59.868303 sshd[5684]: Accepted publickey for core from 147.75.109.163 port 33278 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:02:59.871217 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:02:59.878663 systemd-logind[1490]: New session 14 of user core.
Feb 14 01:02:59.886811 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 14 01:03:00.636967 sshd[5684]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:00.642436 systemd[1]: sshd@15-10.230.12.186:22-147.75.109.163:33278.service: Deactivated successfully.
Feb 14 01:03:00.645486 systemd[1]: session-14.scope: Deactivated successfully.
Feb 14 01:03:00.646714 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit.
Feb 14 01:03:00.648280 systemd-logind[1490]: Removed session 14.
Feb 14 01:03:00.795881 systemd[1]: Started sshd@16-10.230.12.186:22-147.75.109.163:39498.service - OpenSSH per-connection server daemon (147.75.109.163:39498).
Feb 14 01:03:01.692346 sshd[5749]: Accepted publickey for core from 147.75.109.163 port 39498 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:01.694700 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:01.703765 systemd-logind[1490]: New session 15 of user core.
Feb 14 01:03:01.711828 systemd[1]: Started session-15.scope - Session 15 of User core.
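The ContainerStatus/NotFound errors above are a benign race: the container was removed an instant before a status lookup, so the runtime answers rpc code NotFound and the kubelet treats the container as already gone. As an editorial sketch (socket path and container ID assumed from the log), this is how a CRI client would make the same call and tolerate that outcome:

// cristatus.go - sketch of handling the NotFound race logged above when
// querying container status over the CRI gRPC API.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Container ID copied from the log for illustration.
	_, err = rt.ContainerStatus(context.Background(), &runtimeapi.ContainerStatusRequest{
		ContainerId: "7d317b041e7c14e5547c16b1bf1be569d6ec7e40ffa4a950653b4819a340d2f0",
	})
	if status.Code(err) == codes.NotFound {
		// Same condition the kubelet logs above: the container was already
		// removed, so there is nothing left to report or delete.
		log.Print("container already removed; treating NotFound as success")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
}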
Feb 14 01:03:02.456917 sshd[5749]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:02.462673 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit.
Feb 14 01:03:02.463234 systemd[1]: sshd@16-10.230.12.186:22-147.75.109.163:39498.service: Deactivated successfully.
Feb 14 01:03:02.467626 systemd[1]: session-15.scope: Deactivated successfully.
Feb 14 01:03:02.470470 systemd-logind[1490]: Removed session 15.
Feb 14 01:03:02.611894 systemd[1]: Started sshd@17-10.230.12.186:22-147.75.109.163:39514.service - OpenSSH per-connection server daemon (147.75.109.163:39514).
Feb 14 01:03:03.511369 sshd[5797]: Accepted publickey for core from 147.75.109.163 port 39514 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:03.512506 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:03.527205 systemd-logind[1490]: New session 16 of user core.
Feb 14 01:03:03.530763 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 14 01:03:04.233783 sshd[5797]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:04.239854 systemd[1]: sshd@17-10.230.12.186:22-147.75.109.163:39514.service: Deactivated successfully.
Feb 14 01:03:04.242212 systemd[1]: session-16.scope: Deactivated successfully.
Feb 14 01:03:04.243252 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit.
Feb 14 01:03:04.244897 systemd-logind[1490]: Removed session 16.
Feb 14 01:03:06.057992 kubelet[2722]: I0214 01:03:06.057641 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 01:03:09.391816 systemd[1]: Started sshd@18-10.230.12.186:22-147.75.109.163:39522.service - OpenSSH per-connection server daemon (147.75.109.163:39522).
Feb 14 01:03:10.301148 sshd[5937]: Accepted publickey for core from 147.75.109.163 port 39522 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:10.303471 sshd[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:10.310865 systemd-logind[1490]: New session 17 of user core.
Feb 14 01:03:10.319802 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 14 01:03:11.034474 sshd[5937]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:11.040606 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Feb 14 01:03:11.041915 systemd[1]: sshd@18-10.230.12.186:22-147.75.109.163:39522.service: Deactivated successfully.
Feb 14 01:03:11.046492 systemd[1]: session-17.scope: Deactivated successfully.
Feb 14 01:03:11.049412 systemd-logind[1490]: Removed session 17.
Feb 14 01:03:16.194910 systemd[1]: Started sshd@19-10.230.12.186:22-147.75.109.163:44952.service - OpenSSH per-connection server daemon (147.75.109.163:44952).
Feb 14 01:03:17.140121 sshd[6102]: Accepted publickey for core from 147.75.109.163 port 44952 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:17.147124 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:17.160603 systemd-logind[1490]: New session 18 of user core.
Feb 14 01:03:17.167266 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 14 01:03:17.925889 sshd[6102]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:17.933834 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Feb 14 01:03:17.935818 systemd[1]: sshd@19-10.230.12.186:22-147.75.109.163:44952.service: Deactivated successfully.
Feb 14 01:03:17.939335 systemd[1]: session-18.scope: Deactivated successfully.
Feb 14 01:03:17.941577 systemd-logind[1490]: Removed session 18.
Feb 14 01:03:22.134464 systemd[1]: run-containerd-runc-k8s.io-76e50f992e7628bac0b6b812b4d6f07e42d3e57711bf7718d059fe5ac47ed63c-runc.FGoK2Q.mount: Deactivated successfully.
Feb 14 01:03:23.094007 systemd[1]: Started sshd@20-10.230.12.186:22-147.75.109.163:54742.service - OpenSSH per-connection server daemon (147.75.109.163:54742).
Feb 14 01:03:24.022506 sshd[6243]: Accepted publickey for core from 147.75.109.163 port 54742 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:24.030435 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:24.039689 systemd-logind[1490]: New session 19 of user core.
Feb 14 01:03:24.044413 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 14 01:03:24.821044 sshd[6243]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:24.828041 systemd[1]: sshd@20-10.230.12.186:22-147.75.109.163:54742.service: Deactivated successfully.
Feb 14 01:03:24.832380 systemd[1]: session-19.scope: Deactivated successfully.
Feb 14 01:03:24.833659 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Feb 14 01:03:24.835269 systemd-logind[1490]: Removed session 19.
Feb 14 01:03:29.976929 systemd[1]: Started sshd@21-10.230.12.186:22-147.75.109.163:51638.service - OpenSSH per-connection server daemon (147.75.109.163:51638).
Feb 14 01:03:30.871160 sshd[6372]: Accepted publickey for core from 147.75.109.163 port 51638 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:30.873361 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:30.880039 systemd-logind[1490]: New session 20 of user core.
Feb 14 01:03:30.886751 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 14 01:03:31.578979 sshd[6372]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:31.585684 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Feb 14 01:03:31.587471 systemd[1]: sshd@21-10.230.12.186:22-147.75.109.163:51638.service: Deactivated successfully.
Feb 14 01:03:31.592375 systemd[1]: session-20.scope: Deactivated successfully.
Feb 14 01:03:31.593902 systemd-logind[1490]: Removed session 20.
Feb 14 01:03:31.736874 systemd[1]: Started sshd@22-10.230.12.186:22-147.75.109.163:51648.service - OpenSSH per-connection server daemon (147.75.109.163:51648).
Feb 14 01:03:32.638418 sshd[6420]: Accepted publickey for core from 147.75.109.163 port 51648 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:32.640803 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:32.648593 systemd-logind[1490]: New session 21 of user core.
Feb 14 01:03:32.654787 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 14 01:03:33.663854 sshd[6420]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:33.671738 systemd[1]: sshd@22-10.230.12.186:22-147.75.109.163:51648.service: Deactivated successfully.
Feb 14 01:03:33.675342 systemd[1]: session-21.scope: Deactivated successfully.
Feb 14 01:03:33.677021 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Feb 14 01:03:33.678472 systemd-logind[1490]: Removed session 21.
Feb 14 01:03:33.820283 systemd[1]: Started sshd@23-10.230.12.186:22-147.75.109.163:51654.service - OpenSSH per-connection server daemon (147.75.109.163:51654).
Feb 14 01:03:34.729767 sshd[6458]: Accepted publickey for core from 147.75.109.163 port 51654 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:34.731959 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:34.743443 systemd-logind[1490]: New session 22 of user core.
Feb 14 01:03:34.750229 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 14 01:03:36.573863 sshd[6458]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:36.580763 systemd[1]: sshd@23-10.230.12.186:22-147.75.109.163:51654.service: Deactivated successfully.
Feb 14 01:03:36.583311 systemd[1]: session-22.scope: Deactivated successfully.
Feb 14 01:03:36.584561 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Feb 14 01:03:36.586861 systemd-logind[1490]: Removed session 22.
Feb 14 01:03:36.726889 systemd[1]: Started sshd@24-10.230.12.186:22-147.75.109.163:51664.service - OpenSSH per-connection server daemon (147.75.109.163:51664).
Feb 14 01:03:37.631919 sshd[6527]: Accepted publickey for core from 147.75.109.163 port 51664 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:37.634110 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:37.641637 systemd-logind[1490]: New session 23 of user core.
Feb 14 01:03:37.646765 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 14 01:03:38.612039 sshd[6527]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:38.617521 systemd[1]: sshd@24-10.230.12.186:22-147.75.109.163:51664.service: Deactivated successfully.
Feb 14 01:03:38.620152 systemd[1]: session-23.scope: Deactivated successfully.
Feb 14 01:03:38.621169 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Feb 14 01:03:38.622888 systemd-logind[1490]: Removed session 23.
Feb 14 01:03:38.773991 systemd[1]: Started sshd@25-10.230.12.186:22-147.75.109.163:51670.service - OpenSSH per-connection server daemon (147.75.109.163:51670).
Feb 14 01:03:39.653866 sshd[6571]: Accepted publickey for core from 147.75.109.163 port 51670 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:39.656066 sshd[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:39.662880 systemd-logind[1490]: New session 24 of user core.
Feb 14 01:03:39.670144 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 14 01:03:40.367153 sshd[6571]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:40.377487 systemd[1]: sshd@25-10.230.12.186:22-147.75.109.163:51670.service: Deactivated successfully.
Feb 14 01:03:40.383229 systemd[1]: session-24.scope: Deactivated successfully.
Feb 14 01:03:40.385959 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit.
Feb 14 01:03:40.388125 systemd-logind[1490]: Removed session 24.
Feb 14 01:03:41.481137 kubelet[2722]: I0214 01:03:41.481008 2722 scope.go:117] "RemoveContainer" containerID="79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b"
Feb 14 01:03:41.483197 containerd[1514]: time="2025-02-14T01:03:41.483077814Z" level=info msg="RemoveContainer for \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\""
Feb 14 01:03:41.487520 containerd[1514]: time="2025-02-14T01:03:41.487463532Z" level=info msg="RemoveContainer for \"79b90033fe3c22139d6a107f0ccc060cb9fb43ec4c4d7cd5bb74b1c949101d9b\" returns successfully"
Feb 14 01:03:41.488994 containerd[1514]: time="2025-02-14T01:03:41.488961277Z" level=info msg="StopPodSandbox for \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\""
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.564 [WARNING][6646] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.565 [INFO][6646] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.565 [INFO][6646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" iface="eth0" netns=""
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.565 [INFO][6646] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.565 [INFO][6646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.599 [INFO][6652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.599 [INFO][6652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.599 [INFO][6652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.608 [WARNING][6652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.608 [INFO][6652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.610 [INFO][6652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:03:41.615578 containerd[1514]: 2025-02-14 01:03:41.613 [INFO][6646] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.616245 containerd[1514]: time="2025-02-14T01:03:41.615644116Z" level=info msg="TearDown network for sandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" successfully"
Feb 14 01:03:41.616245 containerd[1514]: time="2025-02-14T01:03:41.615684304Z" level=info msg="StopPodSandbox for \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" returns successfully"
Feb 14 01:03:41.616357 containerd[1514]: time="2025-02-14T01:03:41.616282036Z" level=info msg="RemovePodSandbox for \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\""
Feb 14 01:03:41.616357 containerd[1514]: time="2025-02-14T01:03:41.616328059Z" level=info msg="Forcibly stopping sandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\""
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.663 [WARNING][6670] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" WorkloadEndpoint="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.663 [INFO][6670] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.663 [INFO][6670] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" iface="eth0" netns=""
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.663 [INFO][6670] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.663 [INFO][6670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.698 [INFO][6677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.699 [INFO][6677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.699 [INFO][6677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.708 [WARNING][6677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.708 [INFO][6677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" HandleID="k8s-pod-network.1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f" Workload="srv--jzpa0.gb1.brightbox.com-k8s-calico--kube--controllers--6646bd4b95--9q284-eth0"
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.709 [INFO][6677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:03:41.713256 containerd[1514]: 2025-02-14 01:03:41.711 [INFO][6670] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f"
Feb 14 01:03:41.713256 containerd[1514]: time="2025-02-14T01:03:41.713163012Z" level=info msg="TearDown network for sandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" successfully"
Feb 14 01:03:41.717107 containerd[1514]: time="2025-02-14T01:03:41.717054763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 14 01:03:41.717261 containerd[1514]: time="2025-02-14T01:03:41.717136173Z" level=info msg="RemovePodSandbox \"1b2d4e6f017c580b8a8469dbf181efdf3aa3b88cc307c288f8b239e4878c146f\" returns successfully"
Feb 14 01:03:41.722886 containerd[1514]: time="2025-02-14T01:03:41.722602701Z" level=info msg="StopPodSandbox for \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\""
Feb 14 01:03:41.722886 containerd[1514]: time="2025-02-14T01:03:41.722773908Z" level=info msg="TearDown network for sandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" successfully"
Feb 14 01:03:41.722886 containerd[1514]: time="2025-02-14T01:03:41.722795707Z" level=info msg="StopPodSandbox for \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" returns successfully"
Feb 14 01:03:41.723440 containerd[1514]: time="2025-02-14T01:03:41.723200865Z" level=info msg="RemovePodSandbox for \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\""
Feb 14 01:03:41.723440 containerd[1514]: time="2025-02-14T01:03:41.723236017Z" level=info msg="Forcibly stopping sandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\""
Feb 14 01:03:41.723440 containerd[1514]: time="2025-02-14T01:03:41.723386315Z" level=info msg="TearDown network for sandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" successfully"
Feb 14 01:03:41.727776 containerd[1514]: time="2025-02-14T01:03:41.727563893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 14 01:03:41.727776 containerd[1514]: time="2025-02-14T01:03:41.727638295Z" level=info msg="RemovePodSandbox \"b66b87510d6bf37f5be0d9eb4f74f6dbc2738aa8068a0b33521029f807a32b5c\" returns successfully"
Feb 14 01:03:45.524922 systemd[1]: Started sshd@26-10.230.12.186:22-147.75.109.163:59844.service - OpenSSH per-connection server daemon (147.75.109.163:59844).
Feb 14 01:03:46.432253 sshd[6779]: Accepted publickey for core from 147.75.109.163 port 59844 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:46.438371 sshd[6779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:46.445159 systemd-logind[1490]: New session 25 of user core.
Feb 14 01:03:46.453805 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 14 01:03:47.201795 sshd[6779]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:47.208094 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit.
Feb 14 01:03:47.209491 systemd[1]: sshd@26-10.230.12.186:22-147.75.109.163:59844.service: Deactivated successfully.
Feb 14 01:03:47.215980 systemd[1]: session-25.scope: Deactivated successfully.
Feb 14 01:03:47.218482 systemd-logind[1490]: Removed session 25.
Feb 14 01:03:52.370767 systemd[1]: Started sshd@27-10.230.12.186:22-147.75.109.163:52660.service - OpenSSH per-connection server daemon (147.75.109.163:52660).
Feb 14 01:03:53.282054 sshd[6931]: Accepted publickey for core from 147.75.109.163 port 52660 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:03:53.284772 sshd[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:03:53.294293 systemd-logind[1490]: New session 26 of user core.
Feb 14 01:03:53.301776 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 14 01:03:54.017334 sshd[6931]: pam_unix(sshd:session): session closed for user core
Feb 14 01:03:54.021689 systemd[1]: sshd@27-10.230.12.186:22-147.75.109.163:52660.service: Deactivated successfully.
Feb 14 01:03:54.025924 systemd[1]: session-26.scope: Deactivated successfully.
Feb 14 01:03:54.029415 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit.
Feb 14 01:03:54.032148 systemd-logind[1490]: Removed session 26.
Feb 14 01:03:59.178907 systemd[1]: Started sshd@28-10.230.12.186:22-147.75.109.163:52670.service - OpenSSH per-connection server daemon (147.75.109.163:52670).
Feb 14 01:04:00.098371 sshd[7062]: Accepted publickey for core from 147.75.109.163 port 52670 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:04:00.100747 sshd[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:04:00.109095 systemd-logind[1490]: New session 27 of user core.
Feb 14 01:04:00.117751 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 14 01:04:00.880830 sshd[7062]: pam_unix(sshd:session): session closed for user core
Feb 14 01:04:00.886723 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit.
Feb 14 01:04:00.887319 systemd[1]: sshd@28-10.230.12.186:22-147.75.109.163:52670.service: Deactivated successfully.
Feb 14 01:04:00.889839 systemd[1]: session-27.scope: Deactivated successfully.
Feb 14 01:04:00.891602 systemd-logind[1490]: Removed session 27.
Feb 14 01:04:06.080981 systemd[1]: Started sshd@29-10.230.12.186:22-147.75.109.163:56258.service - OpenSSH per-connection server daemon (147.75.109.163:56258).
Feb 14 01:04:06.970996 sshd[7189]: Accepted publickey for core from 147.75.109.163 port 56258 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:04:06.973858 sshd[7189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:04:06.979856 systemd-logind[1490]: New session 28 of user core.
Feb 14 01:04:06.987767 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 14 01:04:07.680139 sshd[7189]: pam_unix(sshd:session): session closed for user core
Feb 14 01:04:07.686891 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit.
Feb 14 01:04:07.687984 systemd[1]: sshd@29-10.230.12.186:22-147.75.109.163:56258.service: Deactivated successfully.
Feb 14 01:04:07.690981 systemd[1]: session-28.scope: Deactivated successfully.
Feb 14 01:04:07.693407 systemd-logind[1490]: Removed session 28.
Feb 14 01:04:12.843904 systemd[1]: Started sshd@30-10.230.12.186:22-147.75.109.163:51584.service - OpenSSH per-connection server daemon (147.75.109.163:51584).
Feb 14 01:04:13.774924 sshd[7335]: Accepted publickey for core from 147.75.109.163 port 51584 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 01:04:13.778081 sshd[7335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:04:13.786247 systemd-logind[1490]: New session 29 of user core.
Feb 14 01:04:13.792818 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 14 01:04:14.694903 sshd[7335]: pam_unix(sshd:session): session closed for user core
Feb 14 01:04:14.699860 systemd[1]: sshd@30-10.230.12.186:22-147.75.109.163:51584.service: Deactivated successfully.
Feb 14 01:04:14.703452 systemd[1]: session-29.scope: Deactivated successfully.
Feb 14 01:04:14.704833 systemd-logind[1490]: Session 29 logged out. Waiting for processes to exit.
Feb 14 01:04:14.706305 systemd-logind[1490]: Removed session 29.