Jul 7 02:47:11.936499 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 7 02:47:11.936532 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 02:47:11.936543 kernel: BIOS-provided physical RAM map: Jul 7 02:47:11.936553 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 7 02:47:11.936560 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 7 02:47:11.936567 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 7 02:47:11.936576 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jul 7 02:47:11.936584 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jul 7 02:47:11.936591 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 7 02:47:11.936599 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 7 02:47:11.936606 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 02:47:11.936613 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 7 02:47:11.936623 kernel: NX (Execute Disable) protection: active Jul 7 02:47:11.936640 kernel: APIC: Static calls initialized Jul 7 02:47:11.936650 kernel: SMBIOS 2.8 present. Jul 7 02:47:11.936659 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jul 7 02:47:11.936668 kernel: Hypervisor detected: KVM Jul 7 02:47:11.936678 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 02:47:11.936687 kernel: kvm-clock: using sched offset of 3746786185 cycles Jul 7 02:47:11.936696 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 02:47:11.936705 kernel: tsc: Detected 2294.576 MHz processor Jul 7 02:47:11.936714 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 02:47:11.936723 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 02:47:11.936732 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jul 7 02:47:11.936740 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 7 02:47:11.936749 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 02:47:11.936759 kernel: Using GB pages for direct mapping Jul 7 02:47:11.936768 kernel: ACPI: Early table checksum verification disabled Jul 7 02:47:11.936776 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jul 7 02:47:11.936785 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:47:11.936793 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:47:11.936802 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:47:11.936810 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jul 7 02:47:11.936818 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:47:11.936827 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 
02:47:11.936837 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:47:11.936846 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:47:11.936854 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jul 7 02:47:11.936863 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jul 7 02:47:11.936871 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jul 7 02:47:11.936884 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jul 7 02:47:11.936893 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jul 7 02:47:11.936904 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jul 7 02:47:11.936913 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jul 7 02:47:11.936922 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 7 02:47:11.936931 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 7 02:47:11.936940 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 7 02:47:11.936948 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jul 7 02:47:11.936958 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 7 02:47:11.936969 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jul 7 02:47:11.936978 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 7 02:47:11.936987 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jul 7 02:47:11.936996 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 7 02:47:11.937004 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jul 7 02:47:11.937013 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 7 02:47:11.937022 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jul 7 02:47:11.937031 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 7 02:47:11.937040 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jul 7 02:47:11.937048 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 7 02:47:11.937059 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jul 7 02:47:11.937069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 7 02:47:11.937078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 7 02:47:11.937087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jul 7 02:47:11.937096 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jul 7 02:47:11.937105 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jul 7 02:47:11.937114 kernel: Zone ranges: Jul 7 02:47:11.937123 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 02:47:11.937132 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jul 7 02:47:11.937160 kernel: Normal empty Jul 7 02:47:11.937170 kernel: Movable zone start for each node Jul 7 02:47:11.937179 kernel: Early memory node ranges Jul 7 02:47:11.937187 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 7 02:47:11.937196 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jul 7 02:47:11.937205 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jul 7 02:47:11.937214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 02:47:11.937223 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 02:47:11.937232 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jul 7 02:47:11.937241 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 02:47:11.937252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 02:47:11.937262 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 02:47:11.937271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Jul 7 02:47:11.937280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 02:47:11.937289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 02:47:11.937298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 02:47:11.937307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 02:47:11.937316 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 02:47:11.937325 kernel: TSC deadline timer available Jul 7 02:47:11.937337 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jul 7 02:47:11.937346 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 02:47:11.937355 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 7 02:47:11.937364 kernel: Booting paravirtualized kernel on KVM Jul 7 02:47:11.937373 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 02:47:11.937382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 02:47:11.937391 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 7 02:47:11.937400 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 7 02:47:11.937409 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 02:47:11.937421 kernel: kvm-guest: PV spinlocks enabled Jul 7 02:47:11.937430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 02:47:11.937440 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 02:47:11.937450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 02:47:11.937459 kernel: random: crng init done Jul 7 02:47:11.937467 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 02:47:11.937477 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 02:47:11.937485 kernel: Fallback order for Node 0: 0 Jul 7 02:47:11.937497 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jul 7 02:47:11.937506 kernel: Policy zone: DMA32 Jul 7 02:47:11.937515 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 02:47:11.937524 kernel: software IO TLB: area num 16. Jul 7 02:47:11.937533 kernel: Memory: 1901516K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 194840K reserved, 0K cma-reserved) Jul 7 02:47:11.937543 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 02:47:11.937552 kernel: ftrace: allocating 37966 entries in 149 pages Jul 7 02:47:11.937561 kernel: ftrace: allocated 149 pages with 4 groups Jul 7 02:47:11.937570 kernel: Dynamic Preempt: voluntary Jul 7 02:47:11.937581 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 02:47:11.937591 kernel: rcu: RCU event tracing is enabled. Jul 7 02:47:11.937600 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 02:47:11.937609 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 02:47:11.937618 kernel: Rude variant of Tasks RCU enabled. 
Jul 7 02:47:11.937644 kernel: Tracing variant of Tasks RCU enabled. Jul 7 02:47:11.937656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 02:47:11.937666 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 02:47:11.937675 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jul 7 02:47:11.937685 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 02:47:11.937695 kernel: Console: colour VGA+ 80x25 Jul 7 02:47:11.937704 kernel: printk: console [tty0] enabled Jul 7 02:47:11.937716 kernel: printk: console [ttyS0] enabled Jul 7 02:47:11.937726 kernel: ACPI: Core revision 20230628 Jul 7 02:47:11.937735 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 02:47:11.937745 kernel: x2apic enabled Jul 7 02:47:11.937755 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 02:47:11.937767 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Jul 7 02:47:11.937777 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576) Jul 7 02:47:11.937787 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 02:47:11.937796 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 7 02:47:11.937806 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 7 02:47:11.937815 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 02:47:11.937825 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 7 02:47:11.937834 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 7 02:47:11.937844 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 7 02:47:11.937856 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 7 02:47:11.937866 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 7 02:47:11.937875 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 02:47:11.937885 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 02:47:11.937894 kernel: TAA: Mitigation: Clear CPU buffers Jul 7 02:47:11.937904 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 02:47:11.937914 kernel: GDS: Unknown: Dependent on hypervisor status Jul 7 02:47:11.937923 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 02:47:11.937933 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 02:47:11.937942 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 02:47:11.937951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 02:47:11.937963 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 7 02:47:11.937973 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 7 02:47:11.937982 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 7 02:47:11.937992 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 7 02:47:11.938001 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 02:47:11.938011 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 7 02:47:11.938020 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 7 02:47:11.938030 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 7 02:47:11.938039 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jul 7 02:47:11.938049 kernel: x86/fpu: Enabled 
xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Jul 7 02:47:11.938059 kernel: Freeing SMP alternatives memory: 32K Jul 7 02:47:11.938068 kernel: pid_max: default: 32768 minimum: 301 Jul 7 02:47:11.938080 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 02:47:11.938089 kernel: landlock: Up and running. Jul 7 02:47:11.938099 kernel: SELinux: Initializing. Jul 7 02:47:11.938109 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 02:47:11.938118 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 02:47:11.938128 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) Jul 7 02:47:11.938146 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 02:47:11.938156 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 02:47:11.938166 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 02:47:11.938176 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 7 02:47:11.938188 kernel: signal: max sigframe size: 3632 Jul 7 02:47:11.938198 kernel: rcu: Hierarchical SRCU implementation. Jul 7 02:47:11.938208 kernel: rcu: Max phase no-delay instances is 400. Jul 7 02:47:11.938217 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 02:47:11.938227 kernel: smp: Bringing up secondary CPUs ... Jul 7 02:47:11.938237 kernel: smpboot: x86: Booting SMP configuration: Jul 7 02:47:11.938246 kernel: .... node #0, CPUs: #1 Jul 7 02:47:11.938256 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 7 02:47:11.938265 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 02:47:11.938277 kernel: smpboot: Max logical packages: 16 Jul 7 02:47:11.938287 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS) Jul 7 02:47:11.938297 kernel: devtmpfs: initialized Jul 7 02:47:11.938306 kernel: x86/mm: Memory block size: 128MB Jul 7 02:47:11.938316 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 02:47:11.938326 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 02:47:11.938336 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 02:47:11.938346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 02:47:11.938355 kernel: audit: initializing netlink subsys (disabled) Jul 7 02:47:11.938367 kernel: audit: type=2000 audit(1751856430.470:1): state=initialized audit_enabled=0 res=1 Jul 7 02:47:11.938377 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 02:47:11.938386 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 02:47:11.938396 kernel: cpuidle: using governor menu Jul 7 02:47:11.938406 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 02:47:11.938415 kernel: dca service started, version 1.12.1 Jul 7 02:47:11.938425 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 7 02:47:11.938435 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 7 02:47:11.938444 kernel: PCI: Using configuration type 1 for base access Jul 7 02:47:11.938456 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 02:47:11.938466 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 02:47:11.938476 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 02:47:11.938486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 02:47:11.938495 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 02:47:11.938505 kernel: ACPI: Added _OSI(Module Device) Jul 7 02:47:11.938514 kernel: ACPI: Added _OSI(Processor Device) Jul 7 02:47:11.938524 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 02:47:11.938534 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 02:47:11.938546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 7 02:47:11.938556 kernel: ACPI: Interpreter enabled Jul 7 02:47:11.938565 kernel: ACPI: PM: (supports S0 S5) Jul 7 02:47:11.938575 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 02:47:11.938584 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 02:47:11.938594 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 02:47:11.938604 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 02:47:11.938613 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 02:47:11.938774 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:47:11.938881 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 02:47:11.938974 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 02:47:11.938987 kernel: PCI host bridge to bus 0000:00 Jul 7 02:47:11.939109 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 02:47:11.939208 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 02:47:11.939293 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 02:47:11.939380 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jul 7 02:47:11.939470 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 7 02:47:11.939546 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jul 7 02:47:11.939622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 02:47:11.939728 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 7 02:47:11.939828 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jul 7 02:47:11.939913 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jul 7 02:47:11.940000 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jul 7 02:47:11.940085 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jul 7 02:47:11.940756 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 02:47:11.940869 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.942803 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jul 7 02:47:11.942924 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.943036 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jul 7 02:47:11.943130 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.943301 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jul 7 02:47:11.943402 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.943497 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jul 7 02:47:11.943598 kernel: 
pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.943709 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jul 7 02:47:11.943807 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.943902 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jul 7 02:47:11.944002 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.944095 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jul 7 02:47:11.944206 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 7 02:47:11.944305 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jul 7 02:47:11.944409 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 7 02:47:11.944504 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 7 02:47:11.944597 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jul 7 02:47:11.944702 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jul 7 02:47:11.944794 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jul 7 02:47:11.944892 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 7 02:47:11.944990 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 7 02:47:11.945082 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jul 7 02:47:11.948756 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jul 7 02:47:11.948876 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 7 02:47:11.948982 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 02:47:11.949074 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 7 02:47:11.949169 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jul 7 02:47:11.949260 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jul 7 02:47:11.949350 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 7 02:47:11.949457 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 7 02:47:11.949560 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jul 7 02:47:11.949667 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jul 7 02:47:11.949763 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 02:47:11.949861 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 7 02:47:11.949953 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 02:47:11.950053 kernel: pci_bus 0000:02: extended config space not accessible Jul 7 02:47:11.952231 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jul 7 02:47:11.952361 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jul 7 02:47:11.952465 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 02:47:11.952605 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 02:47:11.952734 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 7 02:47:11.952831 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jul 7 02:47:11.952926 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 02:47:11.953019 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 02:47:11.953114 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 02:47:11.953240 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 7 02:47:11.953330 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jul 7 02:47:11.953420 kernel: pci 0000:00:02.2: PCI bridge to [bus 
04] Jul 7 02:47:11.953523 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 02:47:11.953616 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 02:47:11.953718 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 02:47:11.953811 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 02:47:11.953903 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 02:47:11.953998 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 02:47:11.954091 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 02:47:11.958256 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 02:47:11.958369 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 02:47:11.958468 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 02:47:11.958562 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 02:47:11.958667 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 02:47:11.958760 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 02:47:11.958853 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 02:47:11.958947 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 02:47:11.959063 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 02:47:11.959172 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 02:47:11.959186 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 02:47:11.959197 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 02:47:11.959207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 02:47:11.959217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 02:47:11.959227 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 02:47:11.959237 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 02:47:11.959251 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 02:47:11.959261 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 02:47:11.959271 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 02:47:11.959281 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 02:47:11.959291 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 02:47:11.959300 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 7 02:47:11.959310 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 02:47:11.959320 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 02:47:11.959330 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 02:47:11.959342 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 02:47:11.959352 kernel: iommu: Default domain type: Translated Jul 7 02:47:11.959362 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 02:47:11.959372 kernel: PCI: Using ACPI for IRQ routing Jul 7 02:47:11.959382 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 02:47:11.959392 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 7 02:47:11.959402 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jul 7 02:47:11.959494 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 02:47:11.959587 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 02:47:11.959692 kernel: pci 0000:00:01.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Jul 7 02:47:11.959705 kernel: vgaarb: loaded Jul 7 02:47:11.959716 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 02:47:11.959726 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 02:47:11.959736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 02:47:11.959746 kernel: pnp: PnP ACPI init Jul 7 02:47:11.959843 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 7 02:47:11.959858 kernel: pnp: PnP ACPI: found 5 devices Jul 7 02:47:11.959871 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 02:47:11.959882 kernel: NET: Registered PF_INET protocol family Jul 7 02:47:11.959892 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 02:47:11.959902 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 02:47:11.959911 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 02:47:11.959922 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 02:47:11.959931 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 02:47:11.959941 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 02:47:11.959954 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 02:47:11.959964 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 02:47:11.959974 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 02:47:11.959983 kernel: NET: Registered PF_XDP protocol family Jul 7 02:47:11.960076 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jul 7 02:47:11.962276 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:47:11.962391 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:47:11.962491 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 7 02:47:11.962595 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 7 02:47:11.962703 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 7 02:47:11.962800 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 7 02:47:11.962901 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 7 02:47:11.963015 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jul 7 02:47:11.963110 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jul 7 02:47:11.964244 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jul 7 02:47:11.964343 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jul 7 02:47:11.964436 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jul 7 02:47:11.964529 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jul 7 02:47:11.964622 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jul 7 02:47:11.964726 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jul 7 02:47:11.964826 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 02:47:11.964925 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 02:47:11.965026 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 02:47:11.965120 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jul 7 02:47:11.966532 kernel: pci 0000:00:02.0: bridge window 
[mem 0xfd800000-0xfdbfffff] Jul 7 02:47:11.966654 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 02:47:11.966753 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 02:47:11.966849 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jul 7 02:47:11.966948 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 02:47:11.967044 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 02:47:11.968156 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 7 02:47:11.968261 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jul 7 02:47:11.968356 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 02:47:11.968451 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 02:47:11.968546 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 02:47:11.968649 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jul 7 02:47:11.968751 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 02:47:11.968844 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 02:47:11.968940 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 02:47:11.969033 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jul 7 02:47:11.969127 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 02:47:11.971255 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 02:47:11.971352 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 02:47:11.971446 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jul 7 02:47:11.971538 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 02:47:11.971638 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 02:47:11.971750 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 02:47:11.971834 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jul 7 02:47:11.971918 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 02:47:11.972003 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 02:47:11.972089 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 02:47:11.972184 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jul 7 02:47:11.972270 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 02:47:11.972359 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 02:47:11.972442 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 02:47:11.972519 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 02:47:11.972595 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 02:47:11.972680 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jul 7 02:47:11.972756 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 7 02:47:11.972835 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jul 7 02:47:11.972924 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jul 7 02:47:11.973005 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jul 7 02:47:11.973084 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 02:47:11.974209 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jul 7 02:47:11.974319 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jul 7 02:47:11.974409 kernel: pci_bus 0000:03: resource 1 
[mem 0xfe800000-0xfe9fffff] Jul 7 02:47:11.974501 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 02:47:11.974594 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 7 02:47:11.974690 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 7 02:47:11.974778 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 02:47:11.974869 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 7 02:47:11.974957 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 7 02:47:11.975049 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 02:47:11.976202 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 7 02:47:11.976291 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 7 02:47:11.976370 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 02:47:11.976463 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 7 02:47:11.976532 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 7 02:47:11.976601 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 02:47:11.976688 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 7 02:47:11.976758 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 7 02:47:11.976826 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 02:47:11.976898 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 7 02:47:11.976966 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 7 02:47:11.977059 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 02:47:11.977072 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 02:47:11.977086 kernel: PCI: CLS 0 bytes, default 64 Jul 7 02:47:11.977096 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 02:47:11.977105 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 7 02:47:11.977115 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 02:47:11.977124 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Jul 7 02:47:11.977134 kernel: Initialise system trusted keyrings Jul 7 02:47:11.977143 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 02:47:11.979157 kernel: Key type asymmetric registered Jul 7 02:47:11.979170 kernel: Asymmetric key parser 'x509' registered Jul 7 02:47:11.979204 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 02:47:11.979233 kernel: io scheduler mq-deadline registered Jul 7 02:47:11.979261 kernel: io scheduler kyber registered Jul 7 02:47:11.979272 kernel: io scheduler bfq registered Jul 7 02:47:11.979380 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 7 02:47:11.979477 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 7 02:47:11.979571 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.979676 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 7 02:47:11.979776 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 7 02:47:11.979870 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.979965 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 7 
02:47:11.980061 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 7 02:47:11.980162 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.980250 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 7 02:47:11.980339 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 7 02:47:11.980423 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.980529 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 7 02:47:11.980615 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 7 02:47:11.980708 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.980794 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 7 02:47:11.980883 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 7 02:47:11.980968 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.981054 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 7 02:47:11.983147 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 7 02:47:11.983228 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.983326 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 7 02:47:11.983418 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 7 02:47:11.983504 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:47:11.983517 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 02:47:11.983527 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 02:47:11.983537 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 7 02:47:11.983547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 02:47:11.983557 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 02:47:11.983569 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 02:47:11.983580 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 02:47:11.983589 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 02:47:11.983685 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 7 02:47:11.983699 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 02:47:11.983774 kernel: rtc_cmos 00:03: registered as rtc0 Jul 7 02:47:11.983853 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T02:47:11 UTC (1751856431) Jul 7 02:47:11.983930 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 7 02:47:11.983946 kernel: intel_pstate: CPU model not supported Jul 7 02:47:11.983955 kernel: NET: Registered PF_INET6 protocol family Jul 7 02:47:11.983965 kernel: Segment Routing with IPv6 Jul 7 02:47:11.983974 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 02:47:11.983984 kernel: NET: Registered PF_PACKET protocol family Jul 7 02:47:11.983994 kernel: Key type dns_resolver registered Jul 7 02:47:11.984003 kernel: IPI shorthand broadcast: enabled Jul 7 02:47:11.984012 kernel: sched_clock: Marking stable (876001922, 124008390)->(1173553761, -173543449) Jul 7 02:47:11.984022 kernel: 
registered taskstats version 1 Jul 7 02:47:11.984050 kernel: Loading compiled-in X.509 certificates Jul 7 02:47:11.984060 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 7 02:47:11.984070 kernel: Key type .fscrypt registered Jul 7 02:47:11.984080 kernel: Key type fscrypt-provisioning registered Jul 7 02:47:11.984091 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 02:47:11.984101 kernel: ima: Allocated hash algorithm: sha1 Jul 7 02:47:11.984111 kernel: ima: No architecture policies found Jul 7 02:47:11.984122 kernel: clk: Disabling unused clocks Jul 7 02:47:11.984132 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 7 02:47:11.984145 kernel: Write protecting the kernel read-only data: 36864k Jul 7 02:47:11.984170 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 7 02:47:11.984181 kernel: Run /init as init process Jul 7 02:47:11.984191 kernel: with arguments: Jul 7 02:47:11.984202 kernel: /init Jul 7 02:47:11.984212 kernel: with environment: Jul 7 02:47:11.984222 kernel: HOME=/ Jul 7 02:47:11.984232 kernel: TERM=linux Jul 7 02:47:11.984242 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 02:47:11.984258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 02:47:11.984271 systemd[1]: Detected virtualization kvm. Jul 7 02:47:11.984283 systemd[1]: Detected architecture x86-64. Jul 7 02:47:11.984293 systemd[1]: Running in initrd. Jul 7 02:47:11.984304 systemd[1]: No hostname configured, using default hostname. Jul 7 02:47:11.984314 systemd[1]: Hostname set to . Jul 7 02:47:11.984325 systemd[1]: Initializing machine ID from VM UUID. Jul 7 02:47:11.984339 systemd[1]: Queued start job for default target initrd.target. Jul 7 02:47:11.984350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 02:47:11.984361 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:47:11.984372 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 02:47:11.984383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 02:47:11.984394 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 02:47:11.984405 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 02:47:11.984420 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 02:47:11.984431 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 02:47:11.984445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:47:11.984455 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:47:11.984466 systemd[1]: Reached target paths.target - Path Units. Jul 7 02:47:11.984477 systemd[1]: Reached target slices.target - Slice Units. Jul 7 02:47:11.984488 systemd[1]: Reached target swap.target - Swaps. 
Jul 7 02:47:11.984499 systemd[1]: Reached target timers.target - Timer Units. Jul 7 02:47:11.984512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 02:47:11.984523 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 02:47:11.984533 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 02:47:11.984544 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 02:47:11.984555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:47:11.984566 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 02:47:11.984576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 02:47:11.984587 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 02:47:11.984598 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 02:47:11.984612 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 02:47:11.984622 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 02:47:11.984640 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 02:47:11.984651 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 02:47:11.984662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 02:47:11.984673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:47:11.984683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 02:47:11.984719 systemd-journald[199]: Collecting audit messages is disabled. Jul 7 02:47:11.984748 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 02:47:11.984759 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 02:47:11.984773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 02:47:11.984785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 02:47:11.984797 systemd-journald[199]: Journal started Jul 7 02:47:11.984821 systemd-journald[199]: Runtime Journal (/run/log/journal/b0d1bd46e4034a9eb42927a742ab4d59) is 4.7M, max 38.0M, 33.2M free. Jul 7 02:47:11.956905 systemd-modules-load[200]: Inserted module 'overlay' Jul 7 02:47:12.016480 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 02:47:12.016504 kernel: Bridge firewalling registered Jul 7 02:47:12.016518 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 02:47:11.988061 systemd-modules-load[200]: Inserted module 'br_netfilter' Jul 7 02:47:12.017402 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 02:47:12.018404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:47:12.032320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 02:47:12.035280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 02:47:12.036578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 02:47:12.046284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 02:47:12.055076 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 02:47:12.057568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:47:12.063323 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 02:47:12.064492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:47:12.065128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:47:12.074475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 02:47:12.078843 dracut-cmdline[230]: dracut-dracut-053 Jul 7 02:47:12.082503 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 02:47:12.114016 systemd-resolved[238]: Positive Trust Anchors: Jul 7 02:47:12.114041 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 02:47:12.114081 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 02:47:12.120966 systemd-resolved[238]: Defaulting to hostname 'linux'. Jul 7 02:47:12.123166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 02:47:12.123673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 02:47:12.184205 kernel: SCSI subsystem initialized Jul 7 02:47:12.197178 kernel: Loading iSCSI transport class v2.0-870. Jul 7 02:47:12.209173 kernel: iscsi: registered transport (tcp) Jul 7 02:47:12.234375 kernel: iscsi: registered transport (qla4xxx) Jul 7 02:47:12.234538 kernel: QLogic iSCSI HBA Driver Jul 7 02:47:12.314762 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 02:47:12.325512 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 02:47:12.357732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 02:47:12.357851 kernel: device-mapper: uevent: version 1.0.3 Jul 7 02:47:12.357892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 02:47:12.409220 kernel: raid6: avx512x4 gen() 17522 MB/s Jul 7 02:47:12.426212 kernel: raid6: avx512x2 gen() 17384 MB/s Jul 7 02:47:12.443200 kernel: raid6: avx512x1 gen() 17365 MB/s Jul 7 02:47:12.460242 kernel: raid6: avx2x4 gen() 17324 MB/s Jul 7 02:47:12.477209 kernel: raid6: avx2x2 gen() 17229 MB/s Jul 7 02:47:12.494284 kernel: raid6: avx2x1 gen() 13149 MB/s Jul 7 02:47:12.494357 kernel: raid6: using algorithm avx512x4 gen() 17522 MB/s Jul 7 02:47:12.512288 kernel: raid6: .... 
xor() 7565 MB/s, rmw enabled Jul 7 02:47:12.512368 kernel: raid6: using avx512x2 recovery algorithm Jul 7 02:47:12.540193 kernel: xor: automatically using best checksumming function avx Jul 7 02:47:12.710178 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 02:47:12.722224 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 02:47:12.728306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:47:12.743970 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jul 7 02:47:12.749163 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:47:12.759336 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 02:47:12.778439 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Jul 7 02:47:12.813984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 02:47:12.819309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 02:47:12.883212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:47:12.894316 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 02:47:12.911190 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 02:47:12.914350 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 02:47:12.915824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 02:47:12.917165 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 02:47:12.923304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 02:47:12.940945 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 02:47:12.972197 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jul 7 02:47:12.974183 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 02:47:12.979164 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 7 02:47:12.989229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 02:47:12.989276 kernel: GPT:17805311 != 125829119 Jul 7 02:47:12.989290 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 02:47:12.990413 kernel: GPT:17805311 != 125829119 Jul 7 02:47:12.990437 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 02:47:12.992156 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:47:13.003441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 02:47:13.004333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:47:13.005982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 02:47:13.006370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 02:47:13.012611 kernel: ACPI: bus type USB registered Jul 7 02:47:13.012636 kernel: usbcore: registered new interface driver usbfs Jul 7 02:47:13.006937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:47:13.007895 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:47:13.018419 kernel: usbcore: registered new interface driver hub Jul 7 02:47:13.018455 kernel: usbcore: registered new device driver usb Jul 7 02:47:13.019388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 02:47:13.034591 kernel: AVX2 version of gcm_enc/dec engaged. Jul 7 02:47:13.034636 kernel: AES CTR mode by8 optimization enabled Jul 7 02:47:13.067180 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Jul 7 02:47:13.087176 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (461) Jul 7 02:47:13.089420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:47:13.105427 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 02:47:13.105624 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 7 02:47:13.107693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 02:47:13.110062 kernel: libata version 3.00 loaded. Jul 7 02:47:13.110083 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 7 02:47:13.111196 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 02:47:13.111349 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 7 02:47:13.111470 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 7 02:47:13.116724 kernel: hub 1-0:1.0: USB hub found Jul 7 02:47:13.117128 kernel: hub 1-0:1.0: 4 ports detected Jul 7 02:47:13.119159 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 7 02:47:13.123151 kernel: hub 2-0:1.0: USB hub found Jul 7 02:47:13.123347 kernel: hub 2-0:1.0: 4 ports detected Jul 7 02:47:13.124337 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 02:47:13.125624 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 02:47:13.125778 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 02:47:13.129163 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 7 02:47:13.129333 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 02:47:13.134204 kernel: scsi host0: ahci Jul 7 02:47:13.134358 kernel: scsi host1: ahci Jul 7 02:47:13.135215 kernel: scsi host2: ahci Jul 7 02:47:13.136155 kernel: scsi host3: ahci Jul 7 02:47:13.136306 kernel: scsi host4: ahci Jul 7 02:47:13.137491 kernel: scsi host5: ahci Jul 7 02:47:13.137954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 02:47:13.153250 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jul 7 02:47:13.153277 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jul 7 02:47:13.153292 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jul 7 02:47:13.153307 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jul 7 02:47:13.153321 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jul 7 02:47:13.153335 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jul 7 02:47:13.157457 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 02:47:13.158702 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 02:47:13.165382 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 02:47:13.174394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 7 02:47:13.181806 disk-uuid[563]: Primary Header is updated. Jul 7 02:47:13.181806 disk-uuid[563]: Secondary Entries is updated. Jul 7 02:47:13.181806 disk-uuid[563]: Secondary Header is updated. Jul 7 02:47:13.189914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:47:13.194374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:47:13.197368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:47:13.359182 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 7 02:47:13.454206 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 02:47:13.454337 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 02:47:13.457567 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 02:47:13.462599 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 02:47:13.462656 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 02:47:13.466200 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 02:47:13.499158 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 02:47:13.504495 kernel: usbcore: registered new interface driver usbhid Jul 7 02:47:13.504559 kernel: usbhid: USB HID core driver Jul 7 02:47:13.509168 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 7 02:47:13.509212 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 7 02:47:14.203209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:47:14.205783 disk-uuid[564]: The operation has completed successfully. Jul 7 02:47:14.250858 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 02:47:14.250980 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 02:47:14.265271 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 02:47:14.270065 sh[583]: Success Jul 7 02:47:14.283189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 7 02:47:14.340783 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 02:47:14.343555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 02:47:14.345271 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 02:47:14.377449 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 7 02:47:14.377582 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:47:14.377623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 02:47:14.378795 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 02:47:14.379740 kernel: BTRFS info (device dm-0): using free space tree Jul 7 02:47:14.386314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 02:47:14.387377 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 02:47:14.393546 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 02:47:14.398367 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 02:47:14.405543 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:47:14.405600 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:47:14.405626 kernel: BTRFS info (device vda6): using free space tree Jul 7 02:47:14.408179 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 02:47:14.417238 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 02:47:14.420175 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:47:14.424174 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 02:47:14.432349 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 02:47:14.521568 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 02:47:14.538014 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 02:47:14.557173 ignition[661]: Ignition 2.19.0 Jul 7 02:47:14.557185 ignition[661]: Stage: fetch-offline Jul 7 02:47:14.557239 ignition[661]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:14.557250 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:14.557378 ignition[661]: parsed url from cmdline: "" Jul 7 02:47:14.557382 ignition[661]: no config URL provided Jul 7 02:47:14.557388 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 02:47:14.557396 ignition[661]: no config at "/usr/lib/ignition/user.ign" Jul 7 02:47:14.557402 ignition[661]: failed to fetch config: resource requires networking Jul 7 02:47:14.558388 ignition[661]: Ignition finished successfully Jul 7 02:47:14.562270 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 02:47:14.568108 systemd-networkd[765]: lo: Link UP Jul 7 02:47:14.568119 systemd-networkd[765]: lo: Gained carrier Jul 7 02:47:14.569488 systemd-networkd[765]: Enumeration completed Jul 7 02:47:14.569803 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 02:47:14.569837 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:47:14.569842 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 02:47:14.570634 systemd[1]: Reached target network.target - Network. Jul 7 02:47:14.570689 systemd-networkd[765]: eth0: Link UP Jul 7 02:47:14.570693 systemd-networkd[765]: eth0: Gained carrier Jul 7 02:47:14.570701 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:47:14.578307 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 02:47:14.581219 systemd-networkd[765]: eth0: DHCPv4 address 10.244.101.74/30, gateway 10.244.101.73 acquired from 10.244.101.73 Jul 7 02:47:14.597308 ignition[774]: Ignition 2.19.0 Jul 7 02:47:14.597319 ignition[774]: Stage: fetch Jul 7 02:47:14.597523 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:14.597535 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:14.597633 ignition[774]: parsed url from cmdline: "" Jul 7 02:47:14.597637 ignition[774]: no config URL provided Jul 7 02:47:14.597642 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 02:47:14.597649 ignition[774]: no config at "/usr/lib/ignition/user.ign" Jul 7 02:47:14.597768 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 7 02:47:14.597801 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 7 02:47:14.597825 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 7 02:47:14.621977 ignition[774]: GET result: OK Jul 7 02:47:14.622562 ignition[774]: parsing config with SHA512: eb8a907b135b5977cc708a44afda65f926270f479aa22b2fee1153eeae3d0a10df76b83199e608654cf9d2be2b1c983274ecc435fbb3ecbae5119b2a36b088df Jul 7 02:47:14.628318 unknown[774]: fetched base config from "system" Jul 7 02:47:14.628331 unknown[774]: fetched base config from "system" Jul 7 02:47:14.628853 ignition[774]: fetch: fetch complete Jul 7 02:47:14.628338 unknown[774]: fetched user config from "openstack" Jul 7 02:47:14.628858 ignition[774]: fetch: fetch passed Jul 7 02:47:14.628907 ignition[774]: Ignition finished successfully Jul 7 02:47:14.630821 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 02:47:14.638390 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 02:47:14.665288 ignition[781]: Ignition 2.19.0 Jul 7 02:47:14.665301 ignition[781]: Stage: kargs Jul 7 02:47:14.665512 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:14.665525 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:14.666618 ignition[781]: kargs: kargs passed Jul 7 02:47:14.666671 ignition[781]: Ignition finished successfully Jul 7 02:47:14.667920 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 02:47:14.673549 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 02:47:14.695430 ignition[787]: Ignition 2.19.0 Jul 7 02:47:14.695455 ignition[787]: Stage: disks Jul 7 02:47:14.695693 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:14.698563 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 02:47:14.695707 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:14.699766 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 02:47:14.696960 ignition[787]: disks: disks passed Jul 7 02:47:14.700693 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 02:47:14.697017 ignition[787]: Ignition finished successfully Jul 7 02:47:14.701926 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 02:47:14.703107 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 02:47:14.704045 systemd[1]: Reached target basic.target - Basic System. Jul 7 02:47:14.714313 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 7 02:47:14.729853 systemd-fsck[795]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 7 02:47:14.734975 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 02:47:14.747306 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 02:47:14.854161 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 7 02:47:14.854594 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 02:47:14.856120 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 02:47:14.866387 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 02:47:14.869823 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 02:47:14.871123 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 02:47:14.873312 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 7 02:47:14.876339 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 02:47:14.877186 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 02:47:14.881488 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (803) Jul 7 02:47:14.885776 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:47:14.885814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:47:14.885832 kernel: BTRFS info (device vda6): using free space tree Jul 7 02:47:14.889157 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 02:47:14.894521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 02:47:14.895071 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 02:47:14.909176 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 02:47:14.981856 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 02:47:14.992446 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Jul 7 02:47:15.001008 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 02:47:15.005026 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 02:47:15.117225 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 02:47:15.122248 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 02:47:15.124867 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 02:47:15.136168 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:47:15.154512 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 02:47:15.164008 ignition[922]: INFO : Ignition 2.19.0 Jul 7 02:47:15.164008 ignition[922]: INFO : Stage: mount Jul 7 02:47:15.165030 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:15.165030 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:15.165993 ignition[922]: INFO : mount: mount passed Jul 7 02:47:15.165993 ignition[922]: INFO : Ignition finished successfully Jul 7 02:47:15.166845 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 02:47:15.377852 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 7 02:47:15.922629 systemd-networkd[765]: eth0: Gained IPv6LL Jul 7 02:47:17.434015 systemd-networkd[765]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1952:24:19ff:fef4:654a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1952:24:19ff:fef4:654a/64 assigned by NDisc. Jul 7 02:47:17.434042 systemd-networkd[765]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 02:47:22.038721 coreos-metadata[805]: Jul 07 02:47:22.038 WARN failed to locate config-drive, using the metadata service API instead Jul 7 02:47:22.062941 coreos-metadata[805]: Jul 07 02:47:22.062 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 02:47:22.100284 coreos-metadata[805]: Jul 07 02:47:22.100 INFO Fetch successful Jul 7 02:47:22.101740 coreos-metadata[805]: Jul 07 02:47:22.101 INFO wrote hostname srv-ijdf9.gb1.brightbox.com to /sysroot/etc/hostname Jul 7 02:47:22.105489 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 7 02:47:22.105686 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 7 02:47:22.116300 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 02:47:22.131490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 02:47:22.138169 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Jul 7 02:47:22.141861 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:47:22.141964 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:47:22.142000 kernel: BTRFS info (device vda6): using free space tree Jul 7 02:47:22.145179 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 02:47:22.149626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 02:47:22.182938 ignition[956]: INFO : Ignition 2.19.0 Jul 7 02:47:22.184211 ignition[956]: INFO : Stage: files Jul 7 02:47:22.184211 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:22.184211 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:22.187560 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jul 7 02:47:22.187560 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 02:47:22.187560 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 02:47:22.191203 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 02:47:22.191203 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 02:47:22.191203 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 02:47:22.190844 unknown[956]: wrote ssh authorized keys file for user: core Jul 7 02:47:22.195603 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 02:47:22.195603 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 02:47:22.195603 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 02:47:22.195603 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 02:47:22.447743 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 02:47:22.979323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 02:47:22.981265 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 02:47:22.991318 ignition[956]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:47:22.991318 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:47:22.991318 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:47:22.991318 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 02:47:23.655243 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 02:47:25.903821 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:47:25.903821 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 02:47:25.915718 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 02:47:25.915718 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 02:47:25.915718 ignition[956]: INFO : files: files passed Jul 7 02:47:25.915718 ignition[956]: INFO : Ignition finished successfully Jul 7 02:47:25.916639 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 02:47:25.926441 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 02:47:25.932087 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 02:47:25.935705 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 02:47:25.935814 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 7 02:47:25.954707 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 02:47:25.956592 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 02:47:25.956592 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 02:47:25.957453 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 02:47:25.958489 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 02:47:25.967267 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 02:47:25.998056 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 02:47:25.998202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 02:47:25.999266 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 02:47:25.999982 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 02:47:26.000835 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 02:47:26.005294 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 02:47:26.020481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 02:47:26.025287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 02:47:26.036942 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 02:47:26.038128 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 02:47:26.039244 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 02:47:26.039766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 02:47:26.039884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 02:47:26.042113 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 02:47:26.042978 systemd[1]: Stopped target basic.target - Basic System. Jul 7 02:47:26.044792 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 02:47:26.046311 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 02:47:26.047395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 02:47:26.048599 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 02:47:26.050689 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 02:47:26.052647 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 02:47:26.054393 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 02:47:26.055829 systemd[1]: Stopped target swap.target - Swaps. Jul 7 02:47:26.056826 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 02:47:26.057053 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 02:47:26.058870 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:47:26.060154 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:47:26.061217 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 02:47:26.061445 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 02:47:26.062355 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 02:47:26.062556 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 02:47:26.063941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 02:47:26.064211 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 02:47:26.068707 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 02:47:26.068834 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 02:47:26.075745 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 02:47:26.079363 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 02:47:26.080393 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 02:47:26.080555 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:47:26.081880 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 02:47:26.082015 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 02:47:26.090792 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 02:47:26.095005 ignition[1008]: INFO : Ignition 2.19.0 Jul 7 02:47:26.095005 ignition[1008]: INFO : Stage: umount Jul 7 02:47:26.099148 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 02:47:26.099148 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:47:26.099148 ignition[1008]: INFO : umount: umount passed Jul 7 02:47:26.099148 ignition[1008]: INFO : Ignition finished successfully Jul 7 02:47:26.096518 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 02:47:26.098667 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 02:47:26.098763 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 02:47:26.101588 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 02:47:26.101679 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 02:47:26.103259 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 02:47:26.103304 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 02:47:26.103762 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 02:47:26.103798 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 02:47:26.104197 systemd[1]: Stopped target network.target - Network. Jul 7 02:47:26.104527 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 02:47:26.104570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 02:47:26.104950 systemd[1]: Stopped target paths.target - Path Units. Jul 7 02:47:26.107193 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 02:47:26.113176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:47:26.113694 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 02:47:26.115592 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 02:47:26.117906 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 02:47:26.118055 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 02:47:26.119376 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 02:47:26.119483 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 7 02:47:26.120948 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 02:47:26.121064 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 02:47:26.122442 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 02:47:26.122544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 02:47:26.124349 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 02:47:26.127092 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 02:47:26.129667 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 02:47:26.130376 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 02:47:26.130494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 02:47:26.130676 systemd-networkd[765]: eth0: DHCPv6 lease lost Jul 7 02:47:26.133090 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 02:47:26.133239 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 02:47:26.135504 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 02:47:26.135654 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 02:47:26.138750 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 02:47:26.138880 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 02:47:26.140929 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 02:47:26.141011 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:47:26.146237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 02:47:26.146636 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 02:47:26.146688 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 02:47:26.147923 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 02:47:26.147968 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:47:26.149670 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 02:47:26.149714 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 02:47:26.151185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 02:47:26.151229 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:47:26.152073 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:47:26.165547 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 02:47:26.165699 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:47:26.167373 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 02:47:26.167439 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 02:47:26.168500 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 02:47:26.168533 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 02:47:26.169627 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 02:47:26.169671 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 02:47:26.171068 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 02:47:26.171109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 02:47:26.172061 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 7 02:47:26.172103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:47:26.178498 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 02:47:26.181511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 02:47:26.181682 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 02:47:26.183051 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 02:47:26.183185 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 02:47:26.185259 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 02:47:26.185307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 02:47:26.186418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 02:47:26.186462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:47:26.188477 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 02:47:26.188582 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 02:47:26.189321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 02:47:26.189406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 02:47:26.191108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 02:47:26.198335 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 02:47:26.208348 systemd[1]: Switching root. Jul 7 02:47:26.249951 systemd-journald[199]: Journal stopped Jul 7 02:47:27.308506 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Jul 7 02:47:27.308593 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 02:47:27.308610 kernel: SELinux: policy capability open_perms=1 Jul 7 02:47:27.308627 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 02:47:27.308640 kernel: SELinux: policy capability always_check_network=0 Jul 7 02:47:27.308655 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 02:47:27.308668 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 02:47:27.308681 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 02:47:27.308693 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 02:47:27.308706 kernel: audit: type=1403 audit(1751856446.463:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 02:47:27.308722 systemd[1]: Successfully loaded SELinux policy in 41.459ms. Jul 7 02:47:27.308747 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.536ms. Jul 7 02:47:27.308763 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 02:47:27.308777 systemd[1]: Detected virtualization kvm. Jul 7 02:47:27.308790 systemd[1]: Detected architecture x86-64. Jul 7 02:47:27.308803 systemd[1]: Detected first boot. Jul 7 02:47:27.308817 systemd[1]: Hostname set to <srv-ijdf9.gb1.brightbox.com>. Jul 7 02:47:27.308830 systemd[1]: Initializing machine ID from VM UUID. Jul 7 02:47:27.308847 zram_generator::config[1072]: No configuration found. Jul 7 02:47:27.308861 systemd[1]: Populated /etc with preset unit settings.
Jul 7 02:47:27.308874 systemd[1]: Queued start job for default target multi-user.target. Jul 7 02:47:27.308888 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 02:47:27.308901 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 02:47:27.308919 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 02:47:27.308932 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 02:47:27.308945 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 02:47:27.308963 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 02:47:27.308977 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 02:47:27.308991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 02:47:27.309004 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 02:47:27.309017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 02:47:27.309030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:47:27.309048 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 02:47:27.309062 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 02:47:27.309093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 02:47:27.309114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 02:47:27.309128 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 02:47:27.309160 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:47:27.309174 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 02:47:27.309187 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 02:47:27.309205 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 02:47:27.309218 systemd[1]: Reached target slices.target - Slice Units. Jul 7 02:47:27.309232 systemd[1]: Reached target swap.target - Swaps. Jul 7 02:47:27.309246 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 02:47:27.309259 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 02:47:27.309273 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 02:47:27.309286 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 02:47:27.309303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:47:27.309318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 02:47:27.309331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 02:47:27.309344 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 02:47:27.309357 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 02:47:27.309370 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 02:47:27.309384 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 7 02:47:27.309397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:27.309411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 02:47:27.309431 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 02:47:27.309445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 02:47:27.309458 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 02:47:27.309472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:47:27.309486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 02:47:27.309499 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 02:47:27.309517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 02:47:27.309537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 02:47:27.309552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 02:47:27.309572 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 02:47:27.309586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 02:47:27.309600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 02:47:27.309613 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 7 02:47:27.309629 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 7 02:47:27.309642 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 02:47:27.309655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 02:47:27.309668 kernel: fuse: init (API version 7.39) Jul 7 02:47:27.309682 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 02:47:27.309697 kernel: loop: module loaded Jul 7 02:47:27.309711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 02:47:27.309745 systemd-journald[1172]: Collecting audit messages is disabled. Jul 7 02:47:27.309776 systemd-journald[1172]: Journal started Jul 7 02:47:27.309803 systemd-journald[1172]: Runtime Journal (/run/log/journal/b0d1bd46e4034a9eb42927a742ab4d59) is 4.7M, max 38.0M, 33.2M free. Jul 7 02:47:27.322519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 02:47:27.329501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:27.344789 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 02:47:27.348158 kernel: ACPI: bus type drm_connector registered Jul 7 02:47:27.346751 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 02:47:27.347337 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 02:47:27.348348 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 02:47:27.348923 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 02:47:27.350485 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jul 7 02:47:27.350990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 02:47:27.351802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 02:47:27.353453 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 02:47:27.353636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 02:47:27.355950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 02:47:27.356116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 02:47:27.357579 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 02:47:27.357739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 02:47:27.359545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 02:47:27.359698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 02:47:27.360424 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 02:47:27.360585 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 02:47:27.361771 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 02:47:27.361962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 02:47:27.364549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 02:47:27.365302 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 02:47:27.368565 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 02:47:27.369318 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 02:47:27.383238 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 02:47:27.389348 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 02:47:27.395279 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 02:47:27.395785 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 02:47:27.404371 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 02:47:27.406326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 02:47:27.409230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 02:47:27.410441 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 02:47:27.413264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 02:47:27.421718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 02:47:27.434298 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 02:47:27.437914 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 02:47:27.439398 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 02:47:27.447598 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 02:47:27.448188 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jul 7 02:47:27.453277 systemd-journald[1172]: Time spent on flushing to /var/log/journal/b0d1bd46e4034a9eb42927a742ab4d59 is 54.595ms for 1141 entries. Jul 7 02:47:27.453277 systemd-journald[1172]: System Journal (/var/log/journal/b0d1bd46e4034a9eb42927a742ab4d59) is 8.0M, max 584.8M, 576.8M free. Jul 7 02:47:27.531353 systemd-journald[1172]: Received client request to flush runtime journal. Jul 7 02:47:27.480993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:47:27.493808 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Jul 7 02:47:27.493824 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Jul 7 02:47:27.509903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 02:47:27.515247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:47:27.526332 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 02:47:27.535451 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 02:47:27.538356 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 02:47:27.562774 udevadm[1240]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 02:47:27.570832 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 02:47:27.581367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 02:47:27.599725 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jul 7 02:47:27.600131 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jul 7 02:47:27.606549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 02:47:28.136617 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 02:47:28.145443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:47:28.171435 systemd-udevd[1252]: Using default interface naming scheme 'v255'. Jul 7 02:47:28.190493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:47:28.204379 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 02:47:28.231040 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 02:47:28.279267 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 7 02:47:28.285168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1256) Jul 7 02:47:28.324073 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jul 7 02:47:28.333266 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 02:47:28.336447 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 02:47:28.346165 kernel: ACPI: button: Power Button [PWRF] Jul 7 02:47:28.430193 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 02:47:28.429155 systemd-networkd[1261]: lo: Link UP Jul 7 02:47:28.430219 systemd-networkd[1261]: lo: Gained carrier Jul 7 02:47:28.433237 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 7 02:47:28.433462 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 02:47:28.432742 systemd-networkd[1261]: Enumeration completed Jul 7 02:47:28.433119 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:47:28.433123 systemd-networkd[1261]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 02:47:28.433266 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 02:47:28.435226 systemd-networkd[1261]: eth0: Link UP Jul 7 02:47:28.435236 systemd-networkd[1261]: eth0: Gained carrier Jul 7 02:47:28.435251 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:47:28.439197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 7 02:47:28.444297 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 02:47:28.446156 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:47:28.461235 systemd-networkd[1261]: eth0: DHCPv4 address 10.244.101.74/30, gateway 10.244.101.73 acquired from 10.244.101.73 Jul 7 02:47:28.512415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:47:28.521235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 02:47:28.648680 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 02:47:28.652624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:47:28.659307 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 02:47:28.674485 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 02:47:28.703907 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 02:47:28.706045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:47:28.714347 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 02:47:28.719308 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 02:47:28.746397 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 02:47:28.748019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 02:47:28.748813 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 02:47:28.748945 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 7 02:47:28.749786 systemd[1]: Reached target machines.target - Containers. Jul 7 02:47:28.751992 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 02:47:28.757287 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 02:47:28.759302 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 02:47:28.760350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:47:28.769293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 02:47:28.773302 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 02:47:28.775080 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 02:47:28.778037 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 02:47:28.801292 kernel: loop0: detected capacity change from 0 to 221472 Jul 7 02:47:28.800154 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 02:47:28.809288 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 02:47:28.810717 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 02:47:28.834171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 02:47:28.858178 kernel: loop1: detected capacity change from 0 to 140768 Jul 7 02:47:28.910409 kernel: loop2: detected capacity change from 0 to 142488 Jul 7 02:47:28.964371 kernel: loop3: detected capacity change from 0 to 8 Jul 7 02:47:28.992907 kernel: loop4: detected capacity change from 0 to 221472 Jul 7 02:47:29.003320 kernel: loop5: detected capacity change from 0 to 140768 Jul 7 02:47:29.016172 kernel: loop6: detected capacity change from 0 to 142488 Jul 7 02:47:29.029167 kernel: loop7: detected capacity change from 0 to 8 Jul 7 02:47:29.030487 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 7 02:47:29.030995 (sd-merge)[1316]: Merged extensions into '/usr'. Jul 7 02:47:29.035156 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 02:47:29.035187 systemd[1]: Reloading... Jul 7 02:47:29.134186 zram_generator::config[1356]: No configuration found. Jul 7 02:47:29.265636 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 02:47:29.297570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:47:29.358997 systemd[1]: Reloading finished in 323 ms. Jul 7 02:47:29.372626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 02:47:29.374733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 02:47:29.391462 systemd[1]: Starting ensure-sysext.service... Jul 7 02:47:29.395293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 02:47:29.400351 systemd[1]: Reloading requested from client PID 1407 ('systemctl') (unit ensure-sysext.service)... Jul 7 02:47:29.400375 systemd[1]: Reloading... 
Jul 7 02:47:29.425785 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 02:47:29.426578 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 02:47:29.427644 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 02:47:29.428025 systemd-tmpfiles[1408]: ACLs are not supported, ignoring. Jul 7 02:47:29.428186 systemd-tmpfiles[1408]: ACLs are not supported, ignoring. Jul 7 02:47:29.431242 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 02:47:29.431348 systemd-tmpfiles[1408]: Skipping /boot Jul 7 02:47:29.441852 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 02:47:29.441975 systemd-tmpfiles[1408]: Skipping /boot Jul 7 02:47:29.482183 zram_generator::config[1435]: No configuration found. Jul 7 02:47:29.646222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:47:29.711294 systemd[1]: Reloading finished in 310 ms. Jul 7 02:47:29.724656 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:47:29.743462 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 02:47:29.746346 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 02:47:29.750394 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 02:47:29.754853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 02:47:29.763563 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 02:47:29.775570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:29.776047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:47:29.781676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 02:47:29.791898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 02:47:29.795623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 02:47:29.796845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:47:29.797131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:29.803378 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:29.803623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:47:29.803830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:47:29.803951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 7 02:47:29.813046 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 02:47:29.814228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 02:47:29.814396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 02:47:29.819660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:29.821576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:47:29.827006 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 02:47:29.827574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:47:29.827691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 02:47:29.827791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:47:29.828596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 02:47:29.828754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 02:47:29.836697 systemd[1]: Finished ensure-sysext.service. Jul 7 02:47:29.856031 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 02:47:29.858304 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 02:47:29.861483 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 02:47:29.861629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 02:47:29.867366 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 02:47:29.867554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 02:47:29.872969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 02:47:29.874782 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 02:47:29.881489 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 02:47:29.886667 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 02:47:29.911427 systemd-resolved[1505]: Positive Trust Anchors: Jul 7 02:47:29.913132 augenrules[1545]: No rules Jul 7 02:47:29.913385 systemd-resolved[1505]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 02:47:29.913491 systemd-resolved[1505]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 02:47:29.913620 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
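The modprobe@dm_mod, modprobe@efi_pstore, modprobe@loop and modprobe@drm units above are instances of systemd's modprobe@.service template, which simply runs modprobe for the instance name. A minimal sketch of the equivalent done by hand, with module names taken from the log:

# Equivalent to the modprobe@*.service instances started above
systemctl start modprobe@dm_mod.service modprobe@loop.service
# or directly:
modprobe dm_mod
modprobe loop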
Jul 7 02:47:29.916368 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 02:47:29.922803 systemd-resolved[1505]: Using system hostname 'srv-ijdf9.gb1.brightbox.com'. Jul 7 02:47:29.924872 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 02:47:29.925589 systemd[1]: Reached target network.target - Network. Jul 7 02:47:29.926062 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 02:47:29.954619 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 02:47:29.955948 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 02:47:29.956902 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 02:47:29.957774 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 02:47:29.958603 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 02:47:29.959446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 02:47:29.959492 systemd[1]: Reached target paths.target - Path Units. Jul 7 02:47:29.960167 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 02:47:29.961121 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 02:47:29.961925 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 02:47:29.962480 systemd[1]: Reached target timers.target - Timer Units. Jul 7 02:47:29.963967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 02:47:29.966306 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 02:47:29.968834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 02:47:29.972165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 02:47:29.978981 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 02:47:29.980591 systemd[1]: Reached target basic.target - Basic System. Jul 7 02:47:29.982230 systemd[1]: System is tainted: cgroupsv1 Jul 7 02:47:29.982298 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 02:47:29.982328 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 02:47:29.989232 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 02:47:29.992284 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 02:47:30.004329 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 02:47:30.009383 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 02:47:30.022354 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 02:47:30.024485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 02:47:30.027003 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
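The "System is tainted: cgroupsv1" entry means systemd is running on the legacy cgroup v1 hierarchy. A hedged sketch of confirming which hierarchy is mounted and of the usual Flatcar way to switch to the unified hierarchy via a kernel argument; the OEM grub.cfg mechanism is an assumption about this environment, not something shown in the log:

# "cgroup2fs" means the unified (v2) hierarchy; "tmpfs" means legacy/hybrid v1
stat -fc %T /sys/fs/cgroup/
# Assumed Flatcar mechanism: append a kernel argument in the OEM grub.cfg, e.g.
#   set linux_append="$linux_append systemd.unified_cgroup_hierarchy=1"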
Jul 7 02:47:30.031738 jq[1561]: false Jul 7 02:47:30.039281 dbus-daemon[1560]: [system] SELinux support is enabled Jul 7 02:47:30.042971 dbus-daemon[1560]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1261 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 02:47:30.041253 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 02:47:30.047346 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 02:47:30.052707 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 02:47:30.060627 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 02:47:30.063865 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 02:47:30.067276 systemd-networkd[1261]: eth0: Gained IPv6LL Jul 7 02:47:30.074943 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 02:47:30.087247 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 02:47:30.090626 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 02:47:30.094939 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 02:47:30.104395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 02:47:30.104641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 02:47:30.118960 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 02:47:30.119236 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 02:47:30.122093 update_engine[1578]: I20250707 02:47:30.122010 1578 main.cc:92] Flatcar Update Engine starting Jul 7 02:47:30.128524 update_engine[1578]: I20250707 02:47:30.126219 1578 update_check_scheduler.cc:74] Next update check in 10m37s Jul 7 02:47:30.128560 jq[1582]: true Jul 7 02:47:30.123496 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 02:47:30.131995 extend-filesystems[1564]: Found loop4 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found loop5 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found loop6 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found loop7 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda1 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda2 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda3 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found usr Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda4 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda6 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda7 Jul 7 02:47:30.131995 extend-filesystems[1564]: Found vda9 Jul 7 02:47:30.131995 extend-filesystems[1564]: Checking size of /dev/vda9 Jul 7 02:47:30.123735 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
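update_engine reports its next check in 10m37s, and locksmithd (a little further down) starts with the "reboot" strategy. A hedged sketch of querying both from a shell using the standard Flatcar client tools; no flags beyond these are assumed:

# Current update-engine state and last check result
update_engine_client -status
# Locksmith reboot strategy and any held reboot locks
locksmithctl status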
Jul 7 02:47:30.166726 tar[1587]: linux-amd64/helm Jul 7 02:47:30.168661 dbus-daemon[1560]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 02:47:30.171063 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 02:47:30.183356 extend-filesystems[1564]: Resized partition /dev/vda9 Jul 7 02:47:30.173501 systemd[1]: Started update-engine.service - Update Engine. Jul 7 02:47:30.183952 extend-filesystems[1606]: resize2fs 1.47.1 (20-May-2024) Jul 7 02:47:30.195413 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 7 02:47:30.178871 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 02:47:30.185257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:47:30.201005 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 02:47:30.202099 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 02:47:30.202182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 02:47:30.214656 jq[1591]: true Jul 7 02:47:30.216444 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 02:47:30.216916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 02:47:30.216941 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 02:47:30.225342 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 02:47:30.235654 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 02:47:30.253771 systemd-timesyncd[1532]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jul 7 02:47:30.253832 systemd-timesyncd[1532]: Initial clock synchronization to Mon 2025-07-07 02:47:30.296409 UTC. Jul 7 02:47:30.306692 systemd-logind[1572]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 02:47:30.309119 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 02:47:30.311006 systemd-logind[1572]: New seat seat0. Jul 7 02:47:30.317688 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 02:47:30.339557 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 02:47:30.351193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1255) Jul 7 02:47:30.387642 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 7 02:47:30.408222 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 02:47:30.408222 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 7 02:47:30.408222 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 7 02:47:30.407626 systemd[1]: extend-filesystems.service: Deactivated successfully. 
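extend-filesystems grew /dev/vda9 from 1617920 to 15121403 blocks while it was mounted on /, i.e. an online ext4 resize. A minimal sketch of the same grow done manually; the device name comes from the log, while using growpart to enlarge the partition first is an assumption about what tools are available:

# Grow partition 9 of /dev/vda to fill the disk (cloud-utils growpart assumed available)
growpart /dev/vda 9
# Online-resize the mounted ext4 filesystem to the new partition size
resize2fs /dev/vda9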
Jul 7 02:47:30.444343 dbus-daemon[1560]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 02:47:30.454067 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Jul 7 02:47:30.407901 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 02:47:30.455839 dbus-daemon[1560]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1610 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 02:47:30.445702 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 02:47:30.466420 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 02:47:30.478332 bash[1641]: Updated "/home/core/.ssh/authorized_keys" Jul 7 02:47:30.483800 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 02:47:30.530033 systemd[1]: Starting sshkeys.service... Jul 7 02:47:30.564734 polkitd[1642]: Started polkitd version 121 Jul 7 02:47:30.578332 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 02:47:30.589580 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 02:47:30.610571 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 02:47:30.613946 polkitd[1642]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 02:47:30.614017 polkitd[1642]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 02:47:30.634566 polkitd[1642]: Finished loading, compiling and executing 2 rules Jul 7 02:47:30.639373 dbus-daemon[1560]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 02:47:30.639655 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 02:47:30.640981 polkitd[1642]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 02:47:30.683605 systemd-hostnamed[1610]: Hostname set to (static) Jul 7 02:47:30.697340 systemd-networkd[1261]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1952:24:19ff:fef4:654a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1952:24:19ff:fef4:654a/64 assigned by NDisc. Jul 7 02:47:30.697508 systemd-networkd[1261]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 02:47:30.776487 containerd[1598]: time="2025-07-07T02:47:30.775558011Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 02:47:30.811250 containerd[1598]: time="2025-07-07T02:47:30.811150163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823597383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823636210Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823653536Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823803341Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823818977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823874854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:47:30.823910 containerd[1598]: time="2025-07-07T02:47:30.823887250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824178 containerd[1598]: time="2025-07-07T02:47:30.824096614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824178 containerd[1598]: time="2025-07-07T02:47:30.824112544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824178 containerd[1598]: time="2025-07-07T02:47:30.824125019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824178 containerd[1598]: time="2025-07-07T02:47:30.824134439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824266 containerd[1598]: time="2025-07-07T02:47:30.824214222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824741 containerd[1598]: time="2025-07-07T02:47:30.824436100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824741 containerd[1598]: time="2025-07-07T02:47:30.824574445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:47:30.824741 containerd[1598]: time="2025-07-07T02:47:30.824588289Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 02:47:30.824741 containerd[1598]: time="2025-07-07T02:47:30.824663301Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 02:47:30.824741 containerd[1598]: time="2025-07-07T02:47:30.824700442Z" level=info msg="metadata content store policy set" policy=shared Jul 7 02:47:30.834311 containerd[1598]: time="2025-07-07T02:47:30.834267594Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 02:47:30.834499 containerd[1598]: time="2025-07-07T02:47:30.834352325Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 02:47:30.834499 containerd[1598]: time="2025-07-07T02:47:30.834370737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 7 02:47:30.834499 containerd[1598]: time="2025-07-07T02:47:30.834385784Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 02:47:30.834499 containerd[1598]: time="2025-07-07T02:47:30.834402257Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 02:47:30.834606 containerd[1598]: time="2025-07-07T02:47:30.834554363Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.834914664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835025399Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835040688Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835054105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835067740Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835082126Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835095423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835109632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835123858Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835158 containerd[1598]: time="2025-07-07T02:47:30.835152468Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835166402Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835180543Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835210365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835227048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835239130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835255342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835268180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835291208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835302755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835315109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835327616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835342145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835353324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835417 containerd[1598]: time="2025-07-07T02:47:30.835365238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835380780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835396557Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835417637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835437050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835460407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835518658Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835539002Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835549907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835562761Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835572125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835583233Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835597993Z" level=info msg="NRI interface is disabled by configuration." Jul 7 02:47:30.835739 containerd[1598]: time="2025-07-07T02:47:30.835608140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 02:47:30.836042 containerd[1598]: time="2025-07-07T02:47:30.835872185Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 02:47:30.836042 containerd[1598]: time="2025-07-07T02:47:30.835942875Z" level=info msg="Connect containerd service" Jul 7 02:47:30.836042 containerd[1598]: time="2025-07-07T02:47:30.835982097Z" level=info msg="using legacy CRI server" Jul 7 02:47:30.836042 containerd[1598]: time="2025-07-07T02:47:30.835989620Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 02:47:30.844844 containerd[1598]: time="2025-07-07T02:47:30.836133364Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 02:47:30.847548 containerd[1598]: 
time="2025-07-07T02:47:30.847516992Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 02:47:30.850424 containerd[1598]: time="2025-07-07T02:47:30.850401340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 02:47:30.850475 containerd[1598]: time="2025-07-07T02:47:30.850458897Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 02:47:30.850639 containerd[1598]: time="2025-07-07T02:47:30.850546040Z" level=info msg="Start subscribing containerd event" Jul 7 02:47:30.850639 containerd[1598]: time="2025-07-07T02:47:30.850609278Z" level=info msg="Start recovering state" Jul 7 02:47:30.850697 containerd[1598]: time="2025-07-07T02:47:30.850681991Z" level=info msg="Start event monitor" Jul 7 02:47:30.850730 containerd[1598]: time="2025-07-07T02:47:30.850698910Z" level=info msg="Start snapshots syncer" Jul 7 02:47:30.850730 containerd[1598]: time="2025-07-07T02:47:30.850708871Z" level=info msg="Start cni network conf syncer for default" Jul 7 02:47:30.850730 containerd[1598]: time="2025-07-07T02:47:30.850716382Z" level=info msg="Start streaming server" Jul 7 02:47:30.850911 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 02:47:30.854403 containerd[1598]: time="2025-07-07T02:47:30.854378615Z" level=info msg="containerd successfully booted in 0.079977s" Jul 7 02:47:31.100913 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 02:47:31.129735 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 02:47:31.141542 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 02:47:31.152465 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 02:47:31.152727 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 02:47:31.162333 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 02:47:31.176577 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 02:47:31.183697 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 02:47:31.187553 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 02:47:31.189454 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 02:47:31.232803 tar[1587]: linux-amd64/LICENSE Jul 7 02:47:31.232803 tar[1587]: linux-amd64/README.md Jul 7 02:47:31.248640 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 02:47:31.708290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:47:31.715970 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:47:32.283048 kubelet[1705]: E0707 02:47:32.282960 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:47:32.286499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:47:32.286682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 02:47:35.373936 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 7 02:47:35.388845 systemd[1]: Started sshd@0-10.244.101.74:22-139.178.68.195:48108.service - OpenSSH per-connection server daemon (139.178.68.195:48108). Jul 7 02:47:36.233930 login[1689]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 02:47:36.239184 login[1690]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 02:47:36.246762 systemd-logind[1572]: New session 1 of user core. Jul 7 02:47:36.248419 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 02:47:36.264006 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 02:47:36.270432 systemd-logind[1572]: New session 2 of user core. Jul 7 02:47:36.285647 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 48108 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:36.285518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 02:47:36.284737 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:36.297538 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 02:47:36.305072 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 02:47:36.306385 systemd-logind[1572]: New session 3 of user core. Jul 7 02:47:36.426492 systemd[1725]: Queued start job for default target default.target. Jul 7 02:47:36.427641 systemd[1725]: Created slice app.slice - User Application Slice. Jul 7 02:47:36.427756 systemd[1725]: Reached target paths.target - Paths. Jul 7 02:47:36.427770 systemd[1725]: Reached target timers.target - Timers. Jul 7 02:47:36.433282 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 02:47:36.459482 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 02:47:36.459793 systemd[1725]: Reached target sockets.target - Sockets. Jul 7 02:47:36.459930 systemd[1725]: Reached target basic.target - Basic System. Jul 7 02:47:36.460003 systemd[1725]: Reached target default.target - Main User Target. Jul 7 02:47:36.460050 systemd[1725]: Startup finished in 141ms. Jul 7 02:47:36.460535 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 02:47:36.469880 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 02:47:36.474777 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 02:47:36.481227 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 02:47:37.124541 systemd[1]: Started sshd@1-10.244.101.74:22-139.178.68.195:48110.service - OpenSSH per-connection server daemon (139.178.68.195:48110). 
Jul 7 02:47:37.142965 coreos-metadata[1559]: Jul 07 02:47:37.142 WARN failed to locate config-drive, using the metadata service API instead Jul 7 02:47:37.161685 coreos-metadata[1559]: Jul 07 02:47:37.161 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 7 02:47:37.168487 coreos-metadata[1559]: Jul 07 02:47:37.168 INFO Fetch failed with 404: resource not found Jul 7 02:47:37.168616 coreos-metadata[1559]: Jul 07 02:47:37.168 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 02:47:37.169300 coreos-metadata[1559]: Jul 07 02:47:37.169 INFO Fetch successful Jul 7 02:47:37.169466 coreos-metadata[1559]: Jul 07 02:47:37.169 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 7 02:47:37.181736 coreos-metadata[1559]: Jul 07 02:47:37.181 INFO Fetch successful Jul 7 02:47:37.182073 coreos-metadata[1559]: Jul 07 02:47:37.182 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 7 02:47:37.204508 coreos-metadata[1559]: Jul 07 02:47:37.204 INFO Fetch successful Jul 7 02:47:37.204859 coreos-metadata[1559]: Jul 07 02:47:37.204 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 7 02:47:37.223769 coreos-metadata[1559]: Jul 07 02:47:37.223 INFO Fetch successful Jul 7 02:47:37.224133 coreos-metadata[1559]: Jul 07 02:47:37.224 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 7 02:47:37.268529 coreos-metadata[1559]: Jul 07 02:47:37.268 INFO Fetch successful Jul 7 02:47:37.298859 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 02:47:37.299860 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 02:47:37.729642 coreos-metadata[1657]: Jul 07 02:47:37.729 WARN failed to locate config-drive, using the metadata service API instead Jul 7 02:47:37.746924 coreos-metadata[1657]: Jul 07 02:47:37.746 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 7 02:47:37.821757 coreos-metadata[1657]: Jul 07 02:47:37.821 INFO Fetch successful Jul 7 02:47:37.822044 coreos-metadata[1657]: Jul 07 02:47:37.821 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 02:47:37.914010 coreos-metadata[1657]: Jul 07 02:47:37.913 INFO Fetch successful Jul 7 02:47:37.916203 unknown[1657]: wrote ssh authorized keys file for user: core Jul 7 02:47:37.936662 update-ssh-keys[1778]: Updated "/home/core/.ssh/authorized_keys" Jul 7 02:47:37.940805 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 02:47:37.945600 systemd[1]: Finished sshkeys.service. Jul 7 02:47:37.948053 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 02:47:37.948398 systemd[1]: Startup finished in 15.777s (kernel) + 11.525s (userspace) = 27.303s. Jul 7 02:47:38.022227 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 48110 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:38.025347 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:38.035270 systemd-logind[1572]: New session 4 of user core. Jul 7 02:47:38.047653 systemd[1]: Started session-4.scope - Session 4 of User core. 
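coreos-metadata fails to find a config drive and falls back to the metadata service, fetching the hostname, instance-id, instance-type, addresses and SSH keys. The same endpoints can be queried by hand; the URLs below are taken directly from the log:

# Endpoints used by coreos-metadata above
curl -s http://169.254.169.254/latest/meta-data/hostname
curl -s http://169.254.169.254/latest/meta-data/instance-id
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key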
Jul 7 02:47:38.653200 sshd[1762]: pam_unix(sshd:session): session closed for user core Jul 7 02:47:38.659960 systemd[1]: sshd@1-10.244.101.74:22-139.178.68.195:48110.service: Deactivated successfully. Jul 7 02:47:38.666287 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit. Jul 7 02:47:38.668232 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 02:47:38.670414 systemd-logind[1572]: Removed session 4. Jul 7 02:47:38.807731 systemd[1]: Started sshd@2-10.244.101.74:22-139.178.68.195:42832.service - OpenSSH per-connection server daemon (139.178.68.195:42832). Jul 7 02:47:39.710825 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 42832 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:39.714412 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:39.721554 systemd-logind[1572]: New session 5 of user core. Jul 7 02:47:39.728457 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 02:47:40.334043 sshd[1790]: pam_unix(sshd:session): session closed for user core Jul 7 02:47:40.340094 systemd[1]: sshd@2-10.244.101.74:22-139.178.68.195:42832.service: Deactivated successfully. Jul 7 02:47:40.344435 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit. Jul 7 02:47:40.344911 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 02:47:40.346673 systemd-logind[1572]: Removed session 5. Jul 7 02:47:40.483735 systemd[1]: Started sshd@3-10.244.101.74:22-139.178.68.195:42844.service - OpenSSH per-connection server daemon (139.178.68.195:42844). Jul 7 02:47:41.377251 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 42844 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:41.380723 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:41.392208 systemd-logind[1572]: New session 6 of user core. Jul 7 02:47:41.396564 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 02:47:41.995586 sshd[1798]: pam_unix(sshd:session): session closed for user core Jul 7 02:47:42.002535 systemd[1]: sshd@3-10.244.101.74:22-139.178.68.195:42844.service: Deactivated successfully. Jul 7 02:47:42.009377 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit. Jul 7 02:47:42.010377 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 02:47:42.011806 systemd-logind[1572]: Removed session 6. Jul 7 02:47:42.161672 systemd[1]: Started sshd@4-10.244.101.74:22-139.178.68.195:42860.service - OpenSSH per-connection server daemon (139.178.68.195:42860). Jul 7 02:47:42.537284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 02:47:42.542382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:47:42.711197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 02:47:42.714046 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:47:42.768350 kubelet[1820]: E0707 02:47:42.768251 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:47:42.772757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:47:42.773852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 02:47:43.045407 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 42860 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:43.049415 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:43.060064 systemd-logind[1572]: New session 7 of user core. Jul 7 02:47:43.069713 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 02:47:43.903827 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 02:47:43.904133 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:47:43.931185 sudo[1831]: pam_unix(sudo:session): session closed for user root Jul 7 02:47:44.077579 sshd[1806]: pam_unix(sshd:session): session closed for user core Jul 7 02:47:44.082652 systemd[1]: sshd@4-10.244.101.74:22-139.178.68.195:42860.service: Deactivated successfully. Jul 7 02:47:44.084490 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Jul 7 02:47:44.088426 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 02:47:44.090264 systemd-logind[1572]: Removed session 7. Jul 7 02:47:44.232022 systemd[1]: Started sshd@5-10.244.101.74:22-139.178.68.195:42876.service - OpenSSH per-connection server daemon (139.178.68.195:42876). Jul 7 02:47:45.148557 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 42876 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:45.153386 sshd[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:45.162123 systemd-logind[1572]: New session 8 of user core. Jul 7 02:47:45.170507 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 02:47:45.633920 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 02:47:45.634346 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:47:45.640039 sudo[1841]: pam_unix(sudo:session): session closed for user root Jul 7 02:47:45.648820 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 02:47:45.649311 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:47:45.670703 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 02:47:45.673608 auditctl[1844]: No rules Jul 7 02:47:45.674009 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 02:47:45.674267 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 02:47:45.682885 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
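kubelet keeps exiting with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, so this restart loop is expected until the node is bootstrapped. A hedged sketch of what a minimal KubeletConfiguration at that path looks like; the field values are illustrative and not taken from this system:

# Normally generated by kubeadm; shown only to illustrate the expected file
mkdir -p /var/lib/kubelet
cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
EOF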
Jul 7 02:47:45.710224 augenrules[1863]: No rules Jul 7 02:47:45.712477 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 02:47:45.714553 sudo[1840]: pam_unix(sudo:session): session closed for user root Jul 7 02:47:45.860753 sshd[1836]: pam_unix(sshd:session): session closed for user core Jul 7 02:47:45.866961 systemd[1]: sshd@5-10.244.101.74:22-139.178.68.195:42876.service: Deactivated successfully. Jul 7 02:47:45.873801 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Jul 7 02:47:45.873973 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 02:47:45.876865 systemd-logind[1572]: Removed session 8. Jul 7 02:47:46.010416 systemd[1]: Started sshd@6-10.244.101.74:22-139.178.68.195:42886.service - OpenSSH per-connection server daemon (139.178.68.195:42886). Jul 7 02:47:46.910026 sshd[1872]: Accepted publickey for core from 139.178.68.195 port 42886 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:47:46.913403 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:47:46.923261 systemd-logind[1572]: New session 9 of user core. Jul 7 02:47:46.931575 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 02:47:47.389769 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 02:47:47.390260 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:47:47.828433 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 02:47:47.836568 (dockerd)[1893]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 02:47:48.236469 dockerd[1893]: time="2025-07-07T02:47:48.236277581Z" level=info msg="Starting up" Jul 7 02:47:48.346801 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3316991884-merged.mount: Deactivated successfully. Jul 7 02:47:48.398883 dockerd[1893]: time="2025-07-07T02:47:48.398837106Z" level=info msg="Loading containers: start." Jul 7 02:47:48.529191 kernel: Initializing XFRM netlink socket Jul 7 02:47:48.621359 systemd-networkd[1261]: docker0: Link UP Jul 7 02:47:48.638178 dockerd[1893]: time="2025-07-07T02:47:48.637514802Z" level=info msg="Loading containers: done." Jul 7 02:47:48.660189 dockerd[1893]: time="2025-07-07T02:47:48.658984957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 02:47:48.660189 dockerd[1893]: time="2025-07-07T02:47:48.659406072Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 02:47:48.660189 dockerd[1893]: time="2025-07-07T02:47:48.659681946Z" level=info msg="Daemon has completed initialization" Jul 7 02:47:48.710221 dockerd[1893]: time="2025-07-07T02:47:48.710079172Z" level=info msg="API listen on /run/docker.sock" Jul 7 02:47:48.710469 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 02:47:49.392107 containerd[1598]: time="2025-07-07T02:47:49.391411947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 7 02:47:50.213602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234501847.mount: Deactivated successfully. 
Jul 7 02:47:51.675078 containerd[1598]: time="2025-07-07T02:47:51.675027348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:51.678177 containerd[1598]: time="2025-07-07T02:47:51.678131774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" Jul 7 02:47:51.679146 containerd[1598]: time="2025-07-07T02:47:51.679115914Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:51.682172 containerd[1598]: time="2025-07-07T02:47:51.682130674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:51.684271 containerd[1598]: time="2025-07-07T02:47:51.684185157Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.292672773s" Jul 7 02:47:51.684317 containerd[1598]: time="2025-07-07T02:47:51.684294427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 7 02:47:51.684938 containerd[1598]: time="2025-07-07T02:47:51.684893740Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 7 02:47:52.799264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 02:47:52.810120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:47:52.956390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:47:52.960077 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:47:53.012999 kubelet[2102]: E0707 02:47:53.012947 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:47:53.015606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:47:53.015884 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 02:47:54.309526 containerd[1598]: time="2025-07-07T02:47:54.309352892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:54.310609 containerd[1598]: time="2025-07-07T02:47:54.310473909Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" Jul 7 02:47:54.310946 containerd[1598]: time="2025-07-07T02:47:54.310900332Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:54.313470 containerd[1598]: time="2025-07-07T02:47:54.313428488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:54.314607 containerd[1598]: time="2025-07-07T02:47:54.314495899Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.62957203s" Jul 7 02:47:54.314607 containerd[1598]: time="2025-07-07T02:47:54.314525076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 7 02:47:54.315282 containerd[1598]: time="2025-07-07T02:47:54.315260258Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 7 02:47:56.412225 containerd[1598]: time="2025-07-07T02:47:56.411784784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:56.413067 containerd[1598]: time="2025-07-07T02:47:56.413025411Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" Jul 7 02:47:56.413400 containerd[1598]: time="2025-07-07T02:47:56.413364966Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:56.416449 containerd[1598]: time="2025-07-07T02:47:56.416410453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:56.417629 containerd[1598]: time="2025-07-07T02:47:56.417495822Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.102205969s" Jul 7 02:47:56.417629 containerd[1598]: time="2025-07-07T02:47:56.417529198Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 7 02:47:56.418782 containerd[1598]: 
time="2025-07-07T02:47:56.418749764Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 7 02:47:58.827223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632641645.mount: Deactivated successfully. Jul 7 02:47:59.264493 containerd[1598]: time="2025-07-07T02:47:59.264281859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:59.265537 containerd[1598]: time="2025-07-07T02:47:59.265211982Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" Jul 7 02:47:59.267407 containerd[1598]: time="2025-07-07T02:47:59.266773314Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:59.269731 containerd[1598]: time="2025-07-07T02:47:59.268871690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.85007214s" Jul 7 02:47:59.269731 containerd[1598]: time="2025-07-07T02:47:59.268951371Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 7 02:47:59.269731 containerd[1598]: time="2025-07-07T02:47:59.269162564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:47:59.270017 containerd[1598]: time="2025-07-07T02:47:59.269923893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 02:48:00.550557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772739784.mount: Deactivated successfully. Jul 7 02:48:00.748395 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 7 02:48:01.398858 containerd[1598]: time="2025-07-07T02:48:01.398772078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:01.404790 containerd[1598]: time="2025-07-07T02:48:01.404498769Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 7 02:48:01.405614 containerd[1598]: time="2025-07-07T02:48:01.405469385Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:01.410857 containerd[1598]: time="2025-07-07T02:48:01.410810548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:01.413130 containerd[1598]: time="2025-07-07T02:48:01.412839391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.142859784s" Jul 7 02:48:01.413130 containerd[1598]: time="2025-07-07T02:48:01.412886149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 02:48:01.414297 containerd[1598]: time="2025-07-07T02:48:01.414195352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 02:48:02.897845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023328378.mount: Deactivated successfully. 
Jul 7 02:48:02.900621 containerd[1598]: time="2025-07-07T02:48:02.900573382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:02.901459 containerd[1598]: time="2025-07-07T02:48:02.901421432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 7 02:48:02.902163 containerd[1598]: time="2025-07-07T02:48:02.901933413Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:02.903905 containerd[1598]: time="2025-07-07T02:48:02.903833134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:02.904816 containerd[1598]: time="2025-07-07T02:48:02.904671164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.490425154s" Jul 7 02:48:02.904816 containerd[1598]: time="2025-07-07T02:48:02.904704743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 02:48:02.905307 containerd[1598]: time="2025-07-07T02:48:02.905288553Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 02:48:03.049465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 02:48:03.061494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:48:03.205336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:48:03.209646 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:48:03.260696 kubelet[2198]: E0707 02:48:03.260531 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:48:03.262500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:48:03.262665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 02:48:04.158513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852461027.mount: Deactivated successfully. 
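The kubelet restart above fails immediately because /var/lib/kubelet/config.yaml does not exist yet; on a node provisioned this way that file is only written during cluster bootstrap, so the "Scheduled restart job" loop is expected until it appears. A minimal sketch of the kind of KubeletConfiguration that ends up at that path, kept consistent with values the kubelet reports later in this log (cgroupfs driver, static pods under /etc/kubernetes/manifests, client certificate rotation on); the real file is generated and contains considerably more:

    # /var/lib/kubelet/config.yaml (sketch only, written during bootstrap)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs                    # matches "CgroupDriver":"cgroupfs" in the node config dump below
    staticPodPath: /etc/kubernetes/manifests  # matches "Adding static pod path" below
    clusterDomain: cluster.local
    rotateCertificates: true                  # consistent with "Client rotation is on" below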
Jul 7 02:48:06.021185 containerd[1598]: time="2025-07-07T02:48:06.019950626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:06.021185 containerd[1598]: time="2025-07-07T02:48:06.020999748Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 7 02:48:06.022034 containerd[1598]: time="2025-07-07T02:48:06.022004475Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:06.025261 containerd[1598]: time="2025-07-07T02:48:06.025231601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:06.026486 containerd[1598]: time="2025-07-07T02:48:06.026457502Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.121063865s" Jul 7 02:48:06.026561 containerd[1598]: time="2025-07-07T02:48:06.026501894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 02:48:08.973199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:48:08.983375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:48:09.016626 systemd[1]: Reloading requested from client PID 2288 ('systemctl') (unit session-9.scope)... Jul 7 02:48:09.016645 systemd[1]: Reloading... Jul 7 02:48:09.150191 zram_generator::config[2327]: No configuration found. Jul 7 02:48:09.297487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:48:09.374548 systemd[1]: Reloading finished in 357 ms. Jul 7 02:48:09.431441 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 02:48:09.431712 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 02:48:09.432257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:48:09.437780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:48:09.571310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:48:09.576685 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 02:48:09.627574 kubelet[2406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:48:09.629195 kubelet[2406]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 7 02:48:09.629195 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:48:09.629195 kubelet[2406]: I0707 02:48:09.628065 2406 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 02:48:10.383980 kubelet[2406]: I0707 02:48:10.383878 2406 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 02:48:10.383980 kubelet[2406]: I0707 02:48:10.383933 2406 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 02:48:10.384378 kubelet[2406]: I0707 02:48:10.384299 2406 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 02:48:10.428403 kubelet[2406]: I0707 02:48:10.428297 2406 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 02:48:10.429538 kubelet[2406]: E0707 02:48:10.429486 2406 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.101.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:10.437528 kubelet[2406]: E0707 02:48:10.437494 2406 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 02:48:10.437528 kubelet[2406]: I0707 02:48:10.437528 2406 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 02:48:10.441712 kubelet[2406]: I0707 02:48:10.441698 2406 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 02:48:10.445315 kubelet[2406]: I0707 02:48:10.445285 2406 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 02:48:10.445516 kubelet[2406]: I0707 02:48:10.445470 2406 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 02:48:10.445716 kubelet[2406]: I0707 02:48:10.445515 2406 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ijdf9.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 02:48:10.445870 kubelet[2406]: I0707 02:48:10.445738 2406 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 02:48:10.445870 kubelet[2406]: I0707 02:48:10.445749 2406 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 02:48:10.445944 kubelet[2406]: I0707 02:48:10.445882 2406 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:48:10.451570 kubelet[2406]: I0707 02:48:10.451286 2406 kubelet.go:408] "Attempting to sync node with API server" Jul 7 02:48:10.451570 kubelet[2406]: I0707 02:48:10.451317 2406 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 02:48:10.451570 kubelet[2406]: I0707 02:48:10.451362 2406 kubelet.go:314] "Adding apiserver pod source" Jul 7 02:48:10.451570 kubelet[2406]: I0707 02:48:10.451390 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 02:48:10.460628 kubelet[2406]: W0707 02:48:10.460465 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.101.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ijdf9.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:10.460628 kubelet[2406]: E0707 02:48:10.460559 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.244.101.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ijdf9.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:10.460761 kubelet[2406]: I0707 02:48:10.460689 2406 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 02:48:10.463979 kubelet[2406]: W0707 02:48:10.463811 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.101.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:10.463979 kubelet[2406]: E0707 02:48:10.463858 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.101.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:10.465345 kubelet[2406]: I0707 02:48:10.465330 2406 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 02:48:10.466035 kubelet[2406]: W0707 02:48:10.466013 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 02:48:10.466851 kubelet[2406]: I0707 02:48:10.466680 2406 server.go:1274] "Started kubelet" Jul 7 02:48:10.467790 kubelet[2406]: I0707 02:48:10.467396 2406 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 02:48:10.468697 kubelet[2406]: I0707 02:48:10.468438 2406 server.go:449] "Adding debug handlers to kubelet server" Jul 7 02:48:10.471755 kubelet[2406]: I0707 02:48:10.471726 2406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 02:48:10.472344 kubelet[2406]: I0707 02:48:10.472065 2406 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 02:48:10.473665 kubelet[2406]: E0707 02:48:10.472276 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.101.74:6443/api/v1/namespaces/default/events\": dial tcp 10.244.101.74:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-ijdf9.gb1.brightbox.com.184fd83243ba38a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ijdf9.gb1.brightbox.com,UID:srv-ijdf9.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ijdf9.gb1.brightbox.com,},FirstTimestamp:2025-07-07 02:48:10.466654368 +0000 UTC m=+0.885409654,LastTimestamp:2025-07-07 02:48:10.466654368 +0000 UTC m=+0.885409654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ijdf9.gb1.brightbox.com,}" Jul 7 02:48:10.481691 kubelet[2406]: I0707 02:48:10.479526 2406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 02:48:10.481691 kubelet[2406]: I0707 02:48:10.481052 2406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 02:48:10.481691 kubelet[2406]: I0707 02:48:10.481093 
2406 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 02:48:10.482078 kubelet[2406]: E0707 02:48:10.481454 2406 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-ijdf9.gb1.brightbox.com\" not found" Jul 7 02:48:10.484547 kubelet[2406]: E0707 02:48:10.484387 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ijdf9.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.74:6443: connect: connection refused" interval="200ms" Jul 7 02:48:10.487104 kubelet[2406]: I0707 02:48:10.487091 2406 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 02:48:10.487921 kubelet[2406]: I0707 02:48:10.487904 2406 reconciler.go:26] "Reconciler: start to sync state" Jul 7 02:48:10.488229 kubelet[2406]: I0707 02:48:10.488211 2406 factory.go:221] Registration of the systemd container factory successfully Jul 7 02:48:10.488298 kubelet[2406]: I0707 02:48:10.488282 2406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 02:48:10.488681 kubelet[2406]: W0707 02:48:10.488643 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.101.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:10.488739 kubelet[2406]: E0707 02:48:10.488690 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.101.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:10.490390 kubelet[2406]: E0707 02:48:10.490373 2406 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 02:48:10.490999 kubelet[2406]: I0707 02:48:10.490987 2406 factory.go:221] Registration of the containerd container factory successfully Jul 7 02:48:10.509831 kubelet[2406]: I0707 02:48:10.509696 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 02:48:10.511513 kubelet[2406]: I0707 02:48:10.511119 2406 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 02:48:10.511513 kubelet[2406]: I0707 02:48:10.511198 2406 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 02:48:10.511513 kubelet[2406]: I0707 02:48:10.511230 2406 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 02:48:10.511513 kubelet[2406]: E0707 02:48:10.511275 2406 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 02:48:10.518830 kubelet[2406]: W0707 02:48:10.518660 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.101.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:10.518830 kubelet[2406]: E0707 02:48:10.518695 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.101.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:10.522531 kubelet[2406]: I0707 02:48:10.522513 2406 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 02:48:10.522531 kubelet[2406]: I0707 02:48:10.522527 2406 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 02:48:10.522651 kubelet[2406]: I0707 02:48:10.522545 2406 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:48:10.523657 kubelet[2406]: I0707 02:48:10.523636 2406 policy_none.go:49] "None policy: Start" Jul 7 02:48:10.524116 kubelet[2406]: I0707 02:48:10.524095 2406 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 02:48:10.524175 kubelet[2406]: I0707 02:48:10.524120 2406 state_mem.go:35] "Initializing new in-memory state store" Jul 7 02:48:10.527945 kubelet[2406]: I0707 02:48:10.527862 2406 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 02:48:10.528199 kubelet[2406]: I0707 02:48:10.528047 2406 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 02:48:10.528199 kubelet[2406]: I0707 02:48:10.528061 2406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 02:48:10.529316 kubelet[2406]: I0707 02:48:10.529281 2406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 02:48:10.533184 kubelet[2406]: E0707 02:48:10.533079 2406 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ijdf9.gb1.brightbox.com\" not found" Jul 7 02:48:10.630498 kubelet[2406]: I0707 02:48:10.630234 2406 kubelet_node_status.go:72] "Attempting to register node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.631104 kubelet[2406]: E0707 02:48:10.631079 2406 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.101.74:6443/api/v1/nodes\": dial tcp 10.244.101.74:6443: connect: connection refused" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.686120 kubelet[2406]: E0707 02:48:10.685857 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ijdf9.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.74:6443: connect: connection refused" interval="400ms" Jul 7 02:48:10.688494 
kubelet[2406]: I0707 02:48:10.688401 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0859971c98ea195c8c8f4bafe5fa4bac-ca-certs\") pod \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" (UID: \"0859971c98ea195c8c8f4bafe5fa4bac\") " pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.688899 kubelet[2406]: I0707 02:48:10.688481 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0859971c98ea195c8c8f4bafe5fa4bac-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" (UID: \"0859971c98ea195c8c8f4bafe5fa4bac\") " pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.688899 kubelet[2406]: I0707 02:48:10.688570 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-flexvolume-dir\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.688899 kubelet[2406]: I0707 02:48:10.688609 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-k8s-certs\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.688899 kubelet[2406]: I0707 02:48:10.688652 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.688899 kubelet[2406]: I0707 02:48:10.688693 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6764d5a8f1746fffd467af4b50240d8d-kubeconfig\") pod \"kube-scheduler-srv-ijdf9.gb1.brightbox.com\" (UID: \"6764d5a8f1746fffd467af4b50240d8d\") " pod="kube-system/kube-scheduler-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.689394 kubelet[2406]: I0707 02:48:10.688729 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0859971c98ea195c8c8f4bafe5fa4bac-k8s-certs\") pod \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" (UID: \"0859971c98ea195c8c8f4bafe5fa4bac\") " pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.689394 kubelet[2406]: I0707 02:48:10.688770 2406 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-ca-certs\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.689394 kubelet[2406]: I0707 02:48:10.688826 2406 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-kubeconfig\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.836327 kubelet[2406]: I0707 02:48:10.836273 2406 kubelet_node_status.go:72] "Attempting to register node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.836897 kubelet[2406]: E0707 02:48:10.836847 2406 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.101.74:6443/api/v1/nodes\": dial tcp 10.244.101.74:6443: connect: connection refused" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:10.921721 containerd[1598]: time="2025-07-07T02:48:10.921613982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ijdf9.gb1.brightbox.com,Uid:6764d5a8f1746fffd467af4b50240d8d,Namespace:kube-system,Attempt:0,}" Jul 7 02:48:10.934216 containerd[1598]: time="2025-07-07T02:48:10.933840308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ijdf9.gb1.brightbox.com,Uid:edc5c68e701729e2c2a3de5d1b320c23,Namespace:kube-system,Attempt:0,}" Jul 7 02:48:10.934216 containerd[1598]: time="2025-07-07T02:48:10.933844685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ijdf9.gb1.brightbox.com,Uid:0859971c98ea195c8c8f4bafe5fa4bac,Namespace:kube-system,Attempt:0,}" Jul 7 02:48:11.087045 kubelet[2406]: E0707 02:48:11.086927 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ijdf9.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.74:6443: connect: connection refused" interval="800ms" Jul 7 02:48:11.242307 kubelet[2406]: I0707 02:48:11.242216 2406 kubelet_node_status.go:72] "Attempting to register node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:11.243102 kubelet[2406]: E0707 02:48:11.242956 2406 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.101.74:6443/api/v1/nodes\": dial tcp 10.244.101.74:6443: connect: connection refused" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:11.421379 kubelet[2406]: W0707 02:48:11.420961 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.101.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ijdf9.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:11.421379 kubelet[2406]: E0707 02:48:11.421264 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.101.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ijdf9.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:11.428114 kubelet[2406]: W0707 02:48:11.428006 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.101.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:11.428367 kubelet[2406]: E0707 02:48:11.428194 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.101.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:11.717723 kubelet[2406]: W0707 02:48:11.717376 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.101.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:11.717723 kubelet[2406]: E0707 02:48:11.717485 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.101.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:11.780381 kubelet[2406]: W0707 02:48:11.780078 2406 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.101.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.101.74:6443: connect: connection refused Jul 7 02:48:11.780381 kubelet[2406]: E0707 02:48:11.780300 2406 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.101.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:11.888210 kubelet[2406]: E0707 02:48:11.888087 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ijdf9.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.74:6443: connect: connection refused" interval="1.6s" Jul 7 02:48:12.048542 kubelet[2406]: I0707 02:48:12.048033 2406 kubelet_node_status.go:72] "Attempting to register node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:12.048753 kubelet[2406]: E0707 02:48:12.048712 2406 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.101.74:6443/api/v1/nodes\": dial tcp 10.244.101.74:6443: connect: connection refused" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:12.154847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314328809.mount: Deactivated successfully. 
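The repeated "connection refused" errors against 10.244.101.74:6443 are the expected bootstrap chicken-and-egg: the API server the kubelet keeps trying to reach is itself one of the static pods whose sandboxes were created just above, so node registration, the lease and the informer watches can only succeed once that container is running. The manifests come from the static pod path; a heavily trimmed sketch of what a kube-apiserver manifest there looks like, with a hostPath volume of the same kind as the ca-certs/k8s-certs mounts reconciled earlier (the real file is generated and carries many more flags and mounts):

    # /etc/kubernetes/manifests/kube-apiserver.yaml (trimmed sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true                # listens on the node IP, here 10.244.101.74:6443
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.31.8
        command: ["kube-apiserver", "--advertise-address=10.244.101.74", "--secure-port=6443"]  # real manifest has many more flags
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate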
Jul 7 02:48:12.158219 containerd[1598]: time="2025-07-07T02:48:12.158065889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:48:12.159453 containerd[1598]: time="2025-07-07T02:48:12.159404067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 7 02:48:12.160305 containerd[1598]: time="2025-07-07T02:48:12.160274448Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:48:12.162069 containerd[1598]: time="2025-07-07T02:48:12.162038477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:48:12.162995 containerd[1598]: time="2025-07-07T02:48:12.162957543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 02:48:12.163201 containerd[1598]: time="2025-07-07T02:48:12.163154389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 02:48:12.163606 containerd[1598]: time="2025-07-07T02:48:12.163577479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:48:12.166115 containerd[1598]: time="2025-07-07T02:48:12.166011175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:48:12.168131 containerd[1598]: time="2025-07-07T02:48:12.168091852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.234142659s" Jul 7 02:48:12.171858 containerd[1598]: time="2025-07-07T02:48:12.171820094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.250000356s" Jul 7 02:48:12.172554 containerd[1598]: time="2025-07-07T02:48:12.172423878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.238333847s" Jul 7 02:48:12.340530 containerd[1598]: time="2025-07-07T02:48:12.339725833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:12.340530 containerd[1598]: time="2025-07-07T02:48:12.340249212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:12.340530 containerd[1598]: time="2025-07-07T02:48:12.340284813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:12.340530 containerd[1598]: time="2025-07-07T02:48:12.340394604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:12.352378 containerd[1598]: time="2025-07-07T02:48:12.351908448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:12.352378 containerd[1598]: time="2025-07-07T02:48:12.351976227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:12.352378 containerd[1598]: time="2025-07-07T02:48:12.351994805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:12.352931 containerd[1598]: time="2025-07-07T02:48:12.352079966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:12.355602 containerd[1598]: time="2025-07-07T02:48:12.355311593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:12.355602 containerd[1598]: time="2025-07-07T02:48:12.355355567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:12.355602 containerd[1598]: time="2025-07-07T02:48:12.355367153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:12.355602 containerd[1598]: time="2025-07-07T02:48:12.355441532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:12.458774 containerd[1598]: time="2025-07-07T02:48:12.458603074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ijdf9.gb1.brightbox.com,Uid:6764d5a8f1746fffd467af4b50240d8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dddacb054eb9960ed8083d12306b2cc4ce3b0b9029fe0c6def0f6f993cd26b7f\"" Jul 7 02:48:12.466700 containerd[1598]: time="2025-07-07T02:48:12.466235337Z" level=info msg="CreateContainer within sandbox \"dddacb054eb9960ed8083d12306b2cc4ce3b0b9029fe0c6def0f6f993cd26b7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 02:48:12.469123 containerd[1598]: time="2025-07-07T02:48:12.469032027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ijdf9.gb1.brightbox.com,Uid:0859971c98ea195c8c8f4bafe5fa4bac,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8da46e1bdf083f328597f3971bbcf8db39846d93f54dbc19b65bca74a70aff5\"" Jul 7 02:48:12.471367 containerd[1598]: time="2025-07-07T02:48:12.471218342Z" level=info msg="CreateContainer within sandbox \"b8da46e1bdf083f328597f3971bbcf8db39846d93f54dbc19b65bca74a70aff5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 02:48:12.474398 containerd[1598]: time="2025-07-07T02:48:12.474364283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ijdf9.gb1.brightbox.com,Uid:edc5c68e701729e2c2a3de5d1b320c23,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff7559b339185e2815c7d370ab1969dc0b2a1aa0fb807795f2eefa6b5c470df8\"" Jul 7 02:48:12.476753 containerd[1598]: time="2025-07-07T02:48:12.476677681Z" level=info msg="CreateContainer within sandbox \"ff7559b339185e2815c7d370ab1969dc0b2a1aa0fb807795f2eefa6b5c470df8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 02:48:12.482057 containerd[1598]: time="2025-07-07T02:48:12.481856456Z" level=info msg="CreateContainer within sandbox \"dddacb054eb9960ed8083d12306b2cc4ce3b0b9029fe0c6def0f6f993cd26b7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c14bfad387907774193bb1614bd054bccfb3324759a268bf85e4c7536378e16\"" Jul 7 02:48:12.482374 containerd[1598]: time="2025-07-07T02:48:12.482346205Z" level=info msg="CreateContainer within sandbox \"b8da46e1bdf083f328597f3971bbcf8db39846d93f54dbc19b65bca74a70aff5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3306e0bc5c594dca6af114299bf2618a94c3bc93045e517be0e21036ff7b9f24\"" Jul 7 02:48:12.483028 containerd[1598]: time="2025-07-07T02:48:12.482996391Z" level=info msg="StartContainer for \"3306e0bc5c594dca6af114299bf2618a94c3bc93045e517be0e21036ff7b9f24\"" Jul 7 02:48:12.483912 containerd[1598]: time="2025-07-07T02:48:12.483488092Z" level=info msg="StartContainer for \"9c14bfad387907774193bb1614bd054bccfb3324759a268bf85e4c7536378e16\"" Jul 7 02:48:12.492199 kubelet[2406]: E0707 02:48:12.492165 2406 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.101.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.101.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:48:12.501435 containerd[1598]: time="2025-07-07T02:48:12.501347579Z" level=info msg="CreateContainer within sandbox 
\"ff7559b339185e2815c7d370ab1969dc0b2a1aa0fb807795f2eefa6b5c470df8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8bfc2ec1b65fe5fdd3de0c0d72adfc5016dac797bee89a2ceb04ba04fdc043a\"" Jul 7 02:48:12.502063 containerd[1598]: time="2025-07-07T02:48:12.502039795Z" level=info msg="StartContainer for \"e8bfc2ec1b65fe5fdd3de0c0d72adfc5016dac797bee89a2ceb04ba04fdc043a\"" Jul 7 02:48:12.602569 containerd[1598]: time="2025-07-07T02:48:12.602371117Z" level=info msg="StartContainer for \"3306e0bc5c594dca6af114299bf2618a94c3bc93045e517be0e21036ff7b9f24\" returns successfully" Jul 7 02:48:12.623758 containerd[1598]: time="2025-07-07T02:48:12.623692571Z" level=info msg="StartContainer for \"e8bfc2ec1b65fe5fdd3de0c0d72adfc5016dac797bee89a2ceb04ba04fdc043a\" returns successfully" Jul 7 02:48:12.629956 containerd[1598]: time="2025-07-07T02:48:12.629713200Z" level=info msg="StartContainer for \"9c14bfad387907774193bb1614bd054bccfb3324759a268bf85e4c7536378e16\" returns successfully" Jul 7 02:48:13.652752 kubelet[2406]: I0707 02:48:13.652432 2406 kubelet_node_status.go:72] "Attempting to register node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:14.694040 kubelet[2406]: E0707 02:48:14.693729 2406 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-ijdf9.gb1.brightbox.com\" not found" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:14.722714 kubelet[2406]: E0707 02:48:14.722055 2406 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-ijdf9.gb1.brightbox.com.184fd83243ba38a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ijdf9.gb1.brightbox.com,UID:srv-ijdf9.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ijdf9.gb1.brightbox.com,},FirstTimestamp:2025-07-07 02:48:10.466654368 +0000 UTC m=+0.885409654,LastTimestamp:2025-07-07 02:48:10.466654368 +0000 UTC m=+0.885409654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ijdf9.gb1.brightbox.com,}" Jul 7 02:48:14.779179 kubelet[2406]: E0707 02:48:14.776877 2406 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-ijdf9.gb1.brightbox.com.184fd832447f11ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ijdf9.gb1.brightbox.com,UID:srv-ijdf9.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:srv-ijdf9.gb1.brightbox.com,},FirstTimestamp:2025-07-07 02:48:10.479555018 +0000 UTC m=+0.898310296,LastTimestamp:2025-07-07 02:48:10.479555018 +0000 UTC m=+0.898310296,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ijdf9.gb1.brightbox.com,}" Jul 7 02:48:14.787172 kubelet[2406]: I0707 02:48:14.786223 2406 kubelet_node_status.go:75] "Successfully registered node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:14.787172 kubelet[2406]: E0707 02:48:14.786262 2406 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node 
\"srv-ijdf9.gb1.brightbox.com\": node \"srv-ijdf9.gb1.brightbox.com\" not found" Jul 7 02:48:15.466218 kubelet[2406]: I0707 02:48:15.464634 2406 apiserver.go:52] "Watching apiserver" Jul 7 02:48:15.487859 kubelet[2406]: I0707 02:48:15.487770 2406 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 02:48:15.577692 kubelet[2406]: W0707 02:48:15.577354 2406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:48:15.577692 kubelet[2406]: W0707 02:48:15.577455 2406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:48:15.659939 update_engine[1578]: I20250707 02:48:15.659721 1578 update_attempter.cc:509] Updating boot flags... Jul 7 02:48:15.721629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2688) Jul 7 02:48:15.787168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2690) Jul 7 02:48:17.047858 systemd[1]: Reloading requested from client PID 2696 ('systemctl') (unit session-9.scope)... Jul 7 02:48:17.047900 systemd[1]: Reloading... Jul 7 02:48:17.141184 zram_generator::config[2736]: No configuration found. Jul 7 02:48:17.294264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:48:17.377413 systemd[1]: Reloading finished in 328 ms. Jul 7 02:48:17.419070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:48:17.437841 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 02:48:17.439109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:48:17.446497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:48:17.631748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:48:17.646563 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 02:48:17.718016 kubelet[2809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:48:17.718524 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 02:48:17.718524 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 02:48:17.719554 kubelet[2809]: I0707 02:48:17.718608 2809 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 02:48:17.734192 kubelet[2809]: I0707 02:48:17.733048 2809 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 02:48:17.734192 kubelet[2809]: I0707 02:48:17.733077 2809 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 02:48:17.734192 kubelet[2809]: I0707 02:48:17.733453 2809 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 02:48:17.737037 kubelet[2809]: I0707 02:48:17.737008 2809 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 02:48:17.742489 kubelet[2809]: I0707 02:48:17.742469 2809 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 02:48:17.753473 kubelet[2809]: E0707 02:48:17.753449 2809 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 02:48:17.753589 kubelet[2809]: I0707 02:48:17.753580 2809 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 02:48:17.757105 kubelet[2809]: I0707 02:48:17.757087 2809 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 02:48:17.757520 kubelet[2809]: I0707 02:48:17.757508 2809 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 02:48:17.757758 kubelet[2809]: I0707 02:48:17.757735 2809 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 02:48:17.757961 kubelet[2809]: I0707 02:48:17.757808 2809 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-ijdf9.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 02:48:17.758099 kubelet[2809]: I0707 02:48:17.758091 2809 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 02:48:17.758157 kubelet[2809]: I0707 02:48:17.758151 2809 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 02:48:17.758229 kubelet[2809]: I0707 02:48:17.758223 2809 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:48:17.758397 kubelet[2809]: I0707 02:48:17.758388 2809 kubelet.go:408] "Attempting to sync node with API server" Jul 7 02:48:17.758474 kubelet[2809]: I0707 02:48:17.758468 2809 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 02:48:17.758545 kubelet[2809]: I0707 02:48:17.758540 2809 kubelet.go:314] "Adding apiserver pod source" Jul 7 02:48:17.758605 kubelet[2809]: I0707 02:48:17.758599 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 02:48:17.762059 kubelet[2809]: I0707 02:48:17.760128 2809 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 02:48:17.764775 kubelet[2809]: I0707 02:48:17.764736 2809 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 02:48:17.765414 kubelet[2809]: I0707 02:48:17.765388 2809 server.go:1274] "Started kubelet" Jul 7 02:48:17.777370 kubelet[2809]: I0707 02:48:17.777349 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 02:48:17.784023 kubelet[2809]: I0707 02:48:17.782358 2809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 02:48:17.784255 kubelet[2809]: I0707 02:48:17.784228 2809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 02:48:17.784619 kubelet[2809]: I0707 02:48:17.784606 2809 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 02:48:17.785437 kubelet[2809]: I0707 02:48:17.785421 2809 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 02:48:17.788934 kubelet[2809]: I0707 02:48:17.788916 2809 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 02:48:17.789506 kubelet[2809]: E0707 02:48:17.789473 2809 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-ijdf9.gb1.brightbox.com\" not found" Jul 7 02:48:17.794134 kubelet[2809]: I0707 02:48:17.794116 2809 server.go:449] "Adding debug handlers to kubelet server" Jul 7 02:48:17.798718 kubelet[2809]: I0707 02:48:17.798701 2809 reconciler.go:26] "Reconciler: start to sync state" Jul 7 02:48:17.800178 kubelet[2809]: I0707 02:48:17.800158 2809 factory.go:221] Registration of the systemd container factory successfully Jul 7 02:48:17.800258 kubelet[2809]: I0707 02:48:17.800238 2809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 02:48:17.800643 kubelet[2809]: I0707 02:48:17.800630 2809 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 02:48:17.802535 kubelet[2809]: E0707 02:48:17.802176 2809 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 02:48:17.803183 kubelet[2809]: I0707 02:48:17.802976 2809 factory.go:221] Registration of the containerd container factory successfully Jul 7 02:48:17.812052 kubelet[2809]: I0707 02:48:17.812021 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 02:48:17.813298 kubelet[2809]: I0707 02:48:17.813280 2809 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 02:48:17.813713 kubelet[2809]: I0707 02:48:17.813412 2809 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 02:48:17.813713 kubelet[2809]: I0707 02:48:17.813438 2809 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 02:48:17.813713 kubelet[2809]: E0707 02:48:17.813476 2809 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 02:48:17.887185 kubelet[2809]: I0707 02:48:17.887061 2809 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 02:48:17.887185 kubelet[2809]: I0707 02:48:17.887083 2809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 02:48:17.887185 kubelet[2809]: I0707 02:48:17.887105 2809 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:48:17.888728 kubelet[2809]: I0707 02:48:17.888705 2809 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 02:48:17.888812 kubelet[2809]: I0707 02:48:17.888727 2809 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 02:48:17.888812 kubelet[2809]: I0707 02:48:17.888752 2809 policy_none.go:49] "None policy: Start" Jul 7 02:48:17.889547 kubelet[2809]: I0707 02:48:17.889530 2809 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 02:48:17.889621 kubelet[2809]: I0707 02:48:17.889553 2809 state_mem.go:35] "Initializing new in-memory state store" Jul 7 02:48:17.889744 kubelet[2809]: I0707 02:48:17.889730 2809 state_mem.go:75] "Updated machine memory state" Jul 7 02:48:17.895716 kubelet[2809]: I0707 02:48:17.895495 2809 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 02:48:17.895716 kubelet[2809]: I0707 02:48:17.895708 2809 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 02:48:17.895861 kubelet[2809]: I0707 02:48:17.895724 2809 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 02:48:17.896953 kubelet[2809]: I0707 02:48:17.896003 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 02:48:17.919312 kubelet[2809]: W0707 02:48:17.919214 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:48:17.922783 kubelet[2809]: W0707 02:48:17.922671 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:48:17.923002 kubelet[2809]: W0707 02:48:17.922809 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:48:17.923002 kubelet[2809]: E0707 02:48:17.922861 2809 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:17.923002 kubelet[2809]: E0707 02:48:17.922927 2809 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.012823 kubelet[2809]: I0707 02:48:18.012752 2809 kubelet_node_status.go:72] "Attempting to register node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.025952 kubelet[2809]: 
I0707 02:48:18.025912 2809 kubelet_node_status.go:111] "Node was previously registered" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.026280 kubelet[2809]: I0707 02:48:18.026089 2809 kubelet_node_status.go:75] "Successfully registered node" node="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102189 kubelet[2809]: I0707 02:48:18.102006 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-ca-certs\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102189 kubelet[2809]: I0707 02:48:18.102059 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-flexvolume-dir\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102189 kubelet[2809]: I0707 02:48:18.102088 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-k8s-certs\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102189 kubelet[2809]: I0707 02:48:18.102109 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-kubeconfig\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102189 kubelet[2809]: I0707 02:48:18.102131 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6764d5a8f1746fffd467af4b50240d8d-kubeconfig\") pod \"kube-scheduler-srv-ijdf9.gb1.brightbox.com\" (UID: \"6764d5a8f1746fffd467af4b50240d8d\") " pod="kube-system/kube-scheduler-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102445 kubelet[2809]: I0707 02:48:18.102173 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0859971c98ea195c8c8f4bafe5fa4bac-ca-certs\") pod \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" (UID: \"0859971c98ea195c8c8f4bafe5fa4bac\") " pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102445 kubelet[2809]: I0707 02:48:18.102195 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0859971c98ea195c8c8f4bafe5fa4bac-k8s-certs\") pod \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" (UID: \"0859971c98ea195c8c8f4bafe5fa4bac\") " pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102445 kubelet[2809]: I0707 02:48:18.102217 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0859971c98ea195c8c8f4bafe5fa4bac-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" (UID: \"0859971c98ea195c8c8f4bafe5fa4bac\") " pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.102445 kubelet[2809]: I0707 02:48:18.102241 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edc5c68e701729e2c2a3de5d1b320c23-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ijdf9.gb1.brightbox.com\" (UID: \"edc5c68e701729e2c2a3de5d1b320c23\") " pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.772952 kubelet[2809]: I0707 02:48:18.772884 2809 apiserver.go:52] "Watching apiserver" Jul 7 02:48:18.801565 kubelet[2809]: I0707 02:48:18.801492 2809 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 02:48:18.870020 kubelet[2809]: W0707 02:48:18.869968 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:48:18.870615 kubelet[2809]: E0707 02:48:18.870377 2809 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-ijdf9.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:18.913452 kubelet[2809]: I0707 02:48:18.912924 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-ijdf9.gb1.brightbox.com" podStartSLOduration=1.912885498 podStartE2EDuration="1.912885498s" podCreationTimestamp="2025-07-07 02:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:48:18.892103898 +0000 UTC m=+1.235448699" watchObservedRunningTime="2025-07-07 02:48:18.912885498 +0000 UTC m=+1.256230244" Jul 7 02:48:18.937632 kubelet[2809]: I0707 02:48:18.937097 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ijdf9.gb1.brightbox.com" podStartSLOduration=3.93704125 podStartE2EDuration="3.93704125s" podCreationTimestamp="2025-07-07 02:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:48:18.914793197 +0000 UTC m=+1.258137946" watchObservedRunningTime="2025-07-07 02:48:18.93704125 +0000 UTC m=+1.280386002" Jul 7 02:48:18.956328 kubelet[2809]: I0707 02:48:18.956111 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-ijdf9.gb1.brightbox.com" podStartSLOduration=3.956084111 podStartE2EDuration="3.956084111s" podCreationTimestamp="2025-07-07 02:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:48:18.937493364 +0000 UTC m=+1.280838135" watchObservedRunningTime="2025-07-07 02:48:18.956084111 +0000 UTC m=+1.299428972" Jul 7 02:48:23.180799 kubelet[2809]: I0707 02:48:23.180719 2809 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 02:48:23.182947 containerd[1598]: time="2025-07-07T02:48:23.182252144Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 02:48:23.183898 kubelet[2809]: I0707 02:48:23.183552 2809 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 02:48:24.040103 kubelet[2809]: I0707 02:48:24.039716 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1ce0633-1d91-430a-8720-b12d993e59b2-lib-modules\") pod \"kube-proxy-7p889\" (UID: \"a1ce0633-1d91-430a-8720-b12d993e59b2\") " pod="kube-system/kube-proxy-7p889" Jul 7 02:48:24.040103 kubelet[2809]: I0707 02:48:24.039760 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnsk\" (UniqueName: \"kubernetes.io/projected/a1ce0633-1d91-430a-8720-b12d993e59b2-kube-api-access-9cnsk\") pod \"kube-proxy-7p889\" (UID: \"a1ce0633-1d91-430a-8720-b12d993e59b2\") " pod="kube-system/kube-proxy-7p889" Jul 7 02:48:24.040103 kubelet[2809]: I0707 02:48:24.039790 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1ce0633-1d91-430a-8720-b12d993e59b2-kube-proxy\") pod \"kube-proxy-7p889\" (UID: \"a1ce0633-1d91-430a-8720-b12d993e59b2\") " pod="kube-system/kube-proxy-7p889" Jul 7 02:48:24.040103 kubelet[2809]: I0707 02:48:24.039810 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1ce0633-1d91-430a-8720-b12d993e59b2-xtables-lock\") pod \"kube-proxy-7p889\" (UID: \"a1ce0633-1d91-430a-8720-b12d993e59b2\") " pod="kube-system/kube-proxy-7p889" Jul 7 02:48:24.140992 kubelet[2809]: I0707 02:48:24.140719 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60e5b3c6-b084-4500-acaf-c2d03693d57e-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-kpd2k\" (UID: \"60e5b3c6-b084-4500-acaf-c2d03693d57e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-kpd2k" Jul 7 02:48:24.140992 kubelet[2809]: I0707 02:48:24.140767 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9nr\" (UniqueName: \"kubernetes.io/projected/60e5b3c6-b084-4500-acaf-c2d03693d57e-kube-api-access-ps9nr\") pod \"tigera-operator-5bf8dfcb4-kpd2k\" (UID: \"60e5b3c6-b084-4500-acaf-c2d03693d57e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-kpd2k" Jul 7 02:48:24.254477 containerd[1598]: time="2025-07-07T02:48:24.254013923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7p889,Uid:a1ce0633-1d91-430a-8720-b12d993e59b2,Namespace:kube-system,Attempt:0,}" Jul 7 02:48:24.297037 containerd[1598]: time="2025-07-07T02:48:24.296677237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:24.297037 containerd[1598]: time="2025-07-07T02:48:24.296774523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:24.297037 containerd[1598]: time="2025-07-07T02:48:24.296792753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:24.297888 containerd[1598]: time="2025-07-07T02:48:24.296903594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:24.354464 containerd[1598]: time="2025-07-07T02:48:24.354353089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7p889,Uid:a1ce0633-1d91-430a-8720-b12d993e59b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"53e8851e1b45028695c9692dc9b9bb8caa0742136fd2c7bee9a1c40f911f895c\"" Jul 7 02:48:24.358849 containerd[1598]: time="2025-07-07T02:48:24.358808971Z" level=info msg="CreateContainer within sandbox \"53e8851e1b45028695c9692dc9b9bb8caa0742136fd2c7bee9a1c40f911f895c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 02:48:24.372127 containerd[1598]: time="2025-07-07T02:48:24.371895628Z" level=info msg="CreateContainer within sandbox \"53e8851e1b45028695c9692dc9b9bb8caa0742136fd2c7bee9a1c40f911f895c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7206cb95eb294c7cadba030a18a46bbfe97caeb30d44f80ef9ecd48538497080\"" Jul 7 02:48:24.374229 containerd[1598]: time="2025-07-07T02:48:24.374150748Z" level=info msg="StartContainer for \"7206cb95eb294c7cadba030a18a46bbfe97caeb30d44f80ef9ecd48538497080\"" Jul 7 02:48:24.404436 containerd[1598]: time="2025-07-07T02:48:24.403878788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-kpd2k,Uid:60e5b3c6-b084-4500-acaf-c2d03693d57e,Namespace:tigera-operator,Attempt:0,}" Jul 7 02:48:24.436109 containerd[1598]: time="2025-07-07T02:48:24.431091808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:24.436109 containerd[1598]: time="2025-07-07T02:48:24.434671835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:24.436109 containerd[1598]: time="2025-07-07T02:48:24.434696301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:24.436109 containerd[1598]: time="2025-07-07T02:48:24.435198107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:24.448594 containerd[1598]: time="2025-07-07T02:48:24.448527815Z" level=info msg="StartContainer for \"7206cb95eb294c7cadba030a18a46bbfe97caeb30d44f80ef9ecd48538497080\" returns successfully" Jul 7 02:48:24.515729 containerd[1598]: time="2025-07-07T02:48:24.515687576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-kpd2k,Uid:60e5b3c6-b084-4500-acaf-c2d03693d57e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c4297b080279a4f4602db09d6d073f8be5cd32511e6df9c2c13fa3bc00a85e5\"" Jul 7 02:48:24.521083 containerd[1598]: time="2025-07-07T02:48:24.521054704Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 02:48:24.897537 kubelet[2809]: I0707 02:48:24.896840 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7p889" podStartSLOduration=1.896793833 podStartE2EDuration="1.896793833s" podCreationTimestamp="2025-07-07 02:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:48:24.896744353 +0000 UTC m=+7.240089119" watchObservedRunningTime="2025-07-07 02:48:24.896793833 +0000 UTC m=+7.240138599" Jul 7 02:48:26.598673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646058164.mount: Deactivated successfully. Jul 7 02:48:27.176358 containerd[1598]: time="2025-07-07T02:48:27.176316619Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:27.176924 containerd[1598]: time="2025-07-07T02:48:27.176874459Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 02:48:27.179162 containerd[1598]: time="2025-07-07T02:48:27.179045202Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:27.180091 containerd[1598]: time="2025-07-07T02:48:27.179826360Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.658521748s" Jul 7 02:48:27.180091 containerd[1598]: time="2025-07-07T02:48:27.179857270Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 02:48:27.180684 containerd[1598]: time="2025-07-07T02:48:27.180661321Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:27.183686 containerd[1598]: time="2025-07-07T02:48:27.183658702Z" level=info msg="CreateContainer within sandbox \"2c4297b080279a4f4602db09d6d073f8be5cd32511e6df9c2c13fa3bc00a85e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 02:48:27.193559 containerd[1598]: time="2025-07-07T02:48:27.193389425Z" level=info msg="CreateContainer within sandbox \"2c4297b080279a4f4602db09d6d073f8be5cd32511e6df9c2c13fa3bc00a85e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"4ee2e407f65f20faada43388a342ef4c82a5ce9425300f07c5d4b961448fb1c6\"" Jul 7 02:48:27.195381 containerd[1598]: time="2025-07-07T02:48:27.195346797Z" level=info msg="StartContainer for \"4ee2e407f65f20faada43388a342ef4c82a5ce9425300f07c5d4b961448fb1c6\"" Jul 7 02:48:27.257767 containerd[1598]: time="2025-07-07T02:48:27.257724871Z" level=info msg="StartContainer for \"4ee2e407f65f20faada43388a342ef4c82a5ce9425300f07c5d4b961448fb1c6\" returns successfully" Jul 7 02:48:27.907789 kubelet[2809]: I0707 02:48:27.907571 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-kpd2k" podStartSLOduration=1.245043098 podStartE2EDuration="3.907536229s" podCreationTimestamp="2025-07-07 02:48:24 +0000 UTC" firstStartedPulling="2025-07-07 02:48:24.519041778 +0000 UTC m=+6.862386525" lastFinishedPulling="2025-07-07 02:48:27.181534907 +0000 UTC m=+9.524879656" observedRunningTime="2025-07-07 02:48:27.907178157 +0000 UTC m=+10.250523048" watchObservedRunningTime="2025-07-07 02:48:27.907536229 +0000 UTC m=+10.250881034" Jul 7 02:48:33.941662 sudo[1876]: pam_unix(sudo:session): session closed for user root Jul 7 02:48:34.088730 sshd[1872]: pam_unix(sshd:session): session closed for user core Jul 7 02:48:34.100235 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Jul 7 02:48:34.101691 systemd[1]: sshd@6-10.244.101.74:22-139.178.68.195:42886.service: Deactivated successfully. Jul 7 02:48:34.114339 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 02:48:34.115489 systemd-logind[1572]: Removed session 9. Jul 7 02:48:37.721571 kubelet[2809]: I0707 02:48:37.721324 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83822812-7fbe-488c-978c-f42c86f96fa7-typha-certs\") pod \"calico-typha-55cf5f5588-dnc2d\" (UID: \"83822812-7fbe-488c-978c-f42c86f96fa7\") " pod="calico-system/calico-typha-55cf5f5588-dnc2d" Jul 7 02:48:37.721571 kubelet[2809]: I0707 02:48:37.721411 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872c9\" (UniqueName: \"kubernetes.io/projected/83822812-7fbe-488c-978c-f42c86f96fa7-kube-api-access-872c9\") pod \"calico-typha-55cf5f5588-dnc2d\" (UID: \"83822812-7fbe-488c-978c-f42c86f96fa7\") " pod="calico-system/calico-typha-55cf5f5588-dnc2d" Jul 7 02:48:37.721571 kubelet[2809]: I0707 02:48:37.721451 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83822812-7fbe-488c-978c-f42c86f96fa7-tigera-ca-bundle\") pod \"calico-typha-55cf5f5588-dnc2d\" (UID: \"83822812-7fbe-488c-978c-f42c86f96fa7\") " pod="calico-system/calico-typha-55cf5f5588-dnc2d" Jul 7 02:48:37.885721 containerd[1598]: time="2025-07-07T02:48:37.885516908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cf5f5588-dnc2d,Uid:83822812-7fbe-488c-978c-f42c86f96fa7,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:37.951838 containerd[1598]: time="2025-07-07T02:48:37.949824407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:37.951838 containerd[1598]: time="2025-07-07T02:48:37.949960595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:37.951838 containerd[1598]: time="2025-07-07T02:48:37.950010281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:37.951838 containerd[1598]: time="2025-07-07T02:48:37.950156138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:38.023024 kubelet[2809]: I0707 02:48:38.022932 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-var-lib-calico\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.023857 kubelet[2809]: I0707 02:48:38.023182 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-cni-log-dir\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.023857 kubelet[2809]: I0707 02:48:38.023220 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-lib-modules\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.023857 kubelet[2809]: I0707 02:48:38.023238 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6341948f-80d9-4c1c-9023-47f591ddd50b-tigera-ca-bundle\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.023857 kubelet[2809]: I0707 02:48:38.023254 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-xtables-lock\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.023857 kubelet[2809]: I0707 02:48:38.023271 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-cni-bin-dir\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.024097 kubelet[2809]: I0707 02:48:38.023293 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-policysync\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.024097 kubelet[2809]: I0707 02:48:38.023308 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-var-run-calico\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.024097 
kubelet[2809]: I0707 02:48:38.023330 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-cni-net-dir\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.024097 kubelet[2809]: I0707 02:48:38.023347 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6341948f-80d9-4c1c-9023-47f591ddd50b-flexvol-driver-host\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.024097 kubelet[2809]: I0707 02:48:38.023364 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlb9p\" (UniqueName: \"kubernetes.io/projected/6341948f-80d9-4c1c-9023-47f591ddd50b-kube-api-access-mlb9p\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.024258 kubelet[2809]: I0707 02:48:38.023380 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6341948f-80d9-4c1c-9023-47f591ddd50b-node-certs\") pod \"calico-node-2xqkz\" (UID: \"6341948f-80d9-4c1c-9023-47f591ddd50b\") " pod="calico-system/calico-node-2xqkz" Jul 7 02:48:38.061555 containerd[1598]: time="2025-07-07T02:48:38.060338023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cf5f5588-dnc2d,Uid:83822812-7fbe-488c-978c-f42c86f96fa7,Namespace:calico-system,Attempt:0,} returns sandbox id \"422b171d15dbaba81ef8c1f4c8548219846c09ff4a81896950098ed68277b7e3\"" Jul 7 02:48:38.067831 containerd[1598]: time="2025-07-07T02:48:38.066836278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 02:48:38.117723 kubelet[2809]: E0707 02:48:38.117061 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:38.134737 kubelet[2809]: E0707 02:48:38.134570 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.134737 kubelet[2809]: W0707 02:48:38.134592 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.134737 kubelet[2809]: E0707 02:48:38.134622 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.145659 kubelet[2809]: E0707 02:48:38.145304 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.145659 kubelet[2809]: W0707 02:48:38.145322 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.145659 kubelet[2809]: E0707 02:48:38.145343 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.148392 kubelet[2809]: E0707 02:48:38.148306 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.148392 kubelet[2809]: W0707 02:48:38.148323 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.148392 kubelet[2809]: E0707 02:48:38.148341 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.148818 kubelet[2809]: E0707 02:48:38.148698 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.148818 kubelet[2809]: W0707 02:48:38.148710 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.148818 kubelet[2809]: E0707 02:48:38.148721 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.149095 kubelet[2809]: E0707 02:48:38.148980 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.149095 kubelet[2809]: W0707 02:48:38.148989 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.149095 kubelet[2809]: E0707 02:48:38.148999 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.149361 kubelet[2809]: E0707 02:48:38.149239 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.149361 kubelet[2809]: W0707 02:48:38.149248 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.149361 kubelet[2809]: E0707 02:48:38.149258 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.149704 kubelet[2809]: E0707 02:48:38.149600 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.149704 kubelet[2809]: W0707 02:48:38.149616 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.149704 kubelet[2809]: E0707 02:48:38.149626 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.149918 kubelet[2809]: E0707 02:48:38.149860 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.149918 kubelet[2809]: W0707 02:48:38.149870 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.149918 kubelet[2809]: E0707 02:48:38.149880 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.150301 kubelet[2809]: E0707 02:48:38.150186 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.150301 kubelet[2809]: W0707 02:48:38.150196 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.150301 kubelet[2809]: E0707 02:48:38.150213 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.150563 kubelet[2809]: E0707 02:48:38.150468 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.150563 kubelet[2809]: W0707 02:48:38.150476 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.150563 kubelet[2809]: E0707 02:48:38.150485 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.150917 kubelet[2809]: E0707 02:48:38.150814 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.150917 kubelet[2809]: W0707 02:48:38.150830 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.150917 kubelet[2809]: E0707 02:48:38.150840 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.151239 kubelet[2809]: E0707 02:48:38.151111 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.151239 kubelet[2809]: W0707 02:48:38.151121 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.151239 kubelet[2809]: E0707 02:48:38.151131 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.151527 kubelet[2809]: E0707 02:48:38.151453 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.151527 kubelet[2809]: W0707 02:48:38.151462 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.151527 kubelet[2809]: E0707 02:48:38.151472 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.151821 kubelet[2809]: E0707 02:48:38.151760 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.151821 kubelet[2809]: W0707 02:48:38.151769 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.151821 kubelet[2809]: E0707 02:48:38.151781 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.152198 kubelet[2809]: E0707 02:48:38.152188 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.152356 kubelet[2809]: W0707 02:48:38.152257 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.152356 kubelet[2809]: E0707 02:48:38.152270 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.152880 kubelet[2809]: E0707 02:48:38.152768 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.152880 kubelet[2809]: W0707 02:48:38.152780 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.152880 kubelet[2809]: E0707 02:48:38.152793 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.153391 kubelet[2809]: E0707 02:48:38.153283 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.153391 kubelet[2809]: W0707 02:48:38.153294 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.153391 kubelet[2809]: E0707 02:48:38.153305 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.153786 kubelet[2809]: E0707 02:48:38.153671 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.153786 kubelet[2809]: W0707 02:48:38.153682 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.153786 kubelet[2809]: E0707 02:48:38.153692 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.154257 kubelet[2809]: E0707 02:48:38.154102 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.154257 kubelet[2809]: W0707 02:48:38.154113 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.154257 kubelet[2809]: E0707 02:48:38.154124 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.154480 kubelet[2809]: E0707 02:48:38.154431 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.154480 kubelet[2809]: W0707 02:48:38.154440 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.154480 kubelet[2809]: E0707 02:48:38.154450 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.155298 kubelet[2809]: E0707 02:48:38.154897 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.155298 kubelet[2809]: W0707 02:48:38.154935 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.155298 kubelet[2809]: E0707 02:48:38.154984 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.156556 kubelet[2809]: E0707 02:48:38.155986 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.156556 kubelet[2809]: W0707 02:48:38.156028 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.156556 kubelet[2809]: E0707 02:48:38.156055 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.195336 containerd[1598]: time="2025-07-07T02:48:38.194865234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xqkz,Uid:6341948f-80d9-4c1c-9023-47f591ddd50b,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:38.226400 kubelet[2809]: E0707 02:48:38.226372 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.226663 kubelet[2809]: W0707 02:48:38.226646 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.227342 kubelet[2809]: E0707 02:48:38.227179 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.227342 kubelet[2809]: I0707 02:48:38.227237 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d6134fd-985e-434b-964a-df65b698ac32-kubelet-dir\") pod \"csi-node-driver-fwz77\" (UID: \"6d6134fd-985e-434b-964a-df65b698ac32\") " pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:38.229253 kubelet[2809]: E0707 02:48:38.229179 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.229253 kubelet[2809]: W0707 02:48:38.229206 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.229253 kubelet[2809]: E0707 02:48:38.229248 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.229639 kubelet[2809]: I0707 02:48:38.229271 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d6134fd-985e-434b-964a-df65b698ac32-registration-dir\") pod \"csi-node-driver-fwz77\" (UID: \"6d6134fd-985e-434b-964a-df65b698ac32\") " pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:38.231777 kubelet[2809]: E0707 02:48:38.230835 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.231777 kubelet[2809]: W0707 02:48:38.230853 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.231777 kubelet[2809]: E0707 02:48:38.231045 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.231777 kubelet[2809]: W0707 02:48:38.231053 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.231777 kubelet[2809]: E0707 02:48:38.231236 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.231777 kubelet[2809]: W0707 02:48:38.231244 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.231777 kubelet[2809]: E0707 02:48:38.231256 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.231777 kubelet[2809]: E0707 02:48:38.231399 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.231777 kubelet[2809]: W0707 02:48:38.231406 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.231777 kubelet[2809]: E0707 02:48:38.231415 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.232130 kubelet[2809]: E0707 02:48:38.231436 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.232130 kubelet[2809]: I0707 02:48:38.231459 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d6134fd-985e-434b-964a-df65b698ac32-varrun\") pod \"csi-node-driver-fwz77\" (UID: \"6d6134fd-985e-434b-964a-df65b698ac32\") " pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:38.232130 kubelet[2809]: E0707 02:48:38.231642 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.232130 kubelet[2809]: W0707 02:48:38.231652 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.232130 kubelet[2809]: E0707 02:48:38.231662 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.232130 kubelet[2809]: I0707 02:48:38.231676 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlf7q\" (UniqueName: \"kubernetes.io/projected/6d6134fd-985e-434b-964a-df65b698ac32-kube-api-access-mlf7q\") pod \"csi-node-driver-fwz77\" (UID: \"6d6134fd-985e-434b-964a-df65b698ac32\") " pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:38.232130 kubelet[2809]: E0707 02:48:38.232102 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.232130 kubelet[2809]: W0707 02:48:38.232114 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.232443 kubelet[2809]: E0707 02:48:38.232126 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.232443 kubelet[2809]: I0707 02:48:38.232168 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d6134fd-985e-434b-964a-df65b698ac32-socket-dir\") pod \"csi-node-driver-fwz77\" (UID: \"6d6134fd-985e-434b-964a-df65b698ac32\") " pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:38.232443 kubelet[2809]: E0707 02:48:38.232354 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.232443 kubelet[2809]: W0707 02:48:38.232373 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.232443 kubelet[2809]: E0707 02:48:38.232382 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.232595 kubelet[2809]: E0707 02:48:38.232506 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.232595 kubelet[2809]: W0707 02:48:38.232513 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.232595 kubelet[2809]: E0707 02:48:38.232520 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.232595 kubelet[2809]: E0707 02:48:38.232533 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.232766 kubelet[2809]: E0707 02:48:38.232684 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.232766 kubelet[2809]: W0707 02:48:38.232692 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.232766 kubelet[2809]: E0707 02:48:38.232699 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.232859 kubelet[2809]: E0707 02:48:38.232832 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.232859 kubelet[2809]: W0707 02:48:38.232840 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.232859 kubelet[2809]: E0707 02:48:38.232847 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.233660 kubelet[2809]: E0707 02:48:38.232965 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.233660 kubelet[2809]: W0707 02:48:38.232978 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.233660 kubelet[2809]: E0707 02:48:38.232985 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.233660 kubelet[2809]: E0707 02:48:38.233121 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.233660 kubelet[2809]: W0707 02:48:38.233128 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.233660 kubelet[2809]: E0707 02:48:38.233134 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.233660 kubelet[2809]: E0707 02:48:38.233287 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.233660 kubelet[2809]: W0707 02:48:38.233295 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.233660 kubelet[2809]: E0707 02:48:38.233302 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.247464 containerd[1598]: time="2025-07-07T02:48:38.247068281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:38.247464 containerd[1598]: time="2025-07-07T02:48:38.247201072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:38.247770 containerd[1598]: time="2025-07-07T02:48:38.247254400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:38.248084 containerd[1598]: time="2025-07-07T02:48:38.247912521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:38.310786 containerd[1598]: time="2025-07-07T02:48:38.310715601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xqkz,Uid:6341948f-80d9-4c1c-9023-47f591ddd50b,Namespace:calico-system,Attempt:0,} returns sandbox id \"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\"" Jul 7 02:48:38.333780 kubelet[2809]: E0707 02:48:38.333339 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.333780 kubelet[2809]: W0707 02:48:38.333361 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.333780 kubelet[2809]: E0707 02:48:38.333381 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.334275 kubelet[2809]: E0707 02:48:38.334077 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.334275 kubelet[2809]: W0707 02:48:38.334089 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.334275 kubelet[2809]: E0707 02:48:38.334165 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.336290 kubelet[2809]: E0707 02:48:38.335698 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.336290 kubelet[2809]: W0707 02:48:38.335711 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.336290 kubelet[2809]: E0707 02:48:38.335752 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.336290 kubelet[2809]: E0707 02:48:38.335956 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.336290 kubelet[2809]: W0707 02:48:38.335970 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.336290 kubelet[2809]: E0707 02:48:38.336111 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.336290 kubelet[2809]: E0707 02:48:38.336169 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.336290 kubelet[2809]: W0707 02:48:38.336176 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.336290 kubelet[2809]: E0707 02:48:38.336192 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.336954 kubelet[2809]: E0707 02:48:38.336851 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.336954 kubelet[2809]: W0707 02:48:38.336864 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.336954 kubelet[2809]: E0707 02:48:38.336927 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.337497 kubelet[2809]: E0707 02:48:38.337343 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.337497 kubelet[2809]: W0707 02:48:38.337354 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.337497 kubelet[2809]: E0707 02:48:38.337394 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.337928 kubelet[2809]: E0707 02:48:38.337917 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.338036 kubelet[2809]: W0707 02:48:38.337994 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.338135 kubelet[2809]: E0707 02:48:38.338093 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.338471 kubelet[2809]: E0707 02:48:38.338391 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.338471 kubelet[2809]: W0707 02:48:38.338402 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.338595 kubelet[2809]: E0707 02:48:38.338565 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.338863 kubelet[2809]: E0707 02:48:38.338756 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.338863 kubelet[2809]: W0707 02:48:38.338766 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.338964 kubelet[2809]: E0707 02:48:38.338954 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.339165 kubelet[2809]: E0707 02:48:38.339132 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.339328 kubelet[2809]: W0707 02:48:38.339152 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.339423 kubelet[2809]: E0707 02:48:38.339381 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.339656 kubelet[2809]: E0707 02:48:38.339580 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.339656 kubelet[2809]: W0707 02:48:38.339590 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.339795 kubelet[2809]: E0707 02:48:38.339748 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.339930 kubelet[2809]: E0707 02:48:38.339908 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.339930 kubelet[2809]: W0707 02:48:38.339917 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.340182 kubelet[2809]: E0707 02:48:38.340077 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.340493 kubelet[2809]: E0707 02:48:38.340458 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.340493 kubelet[2809]: W0707 02:48:38.340469 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.340677 kubelet[2809]: E0707 02:48:38.340645 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.340902 kubelet[2809]: E0707 02:48:38.340853 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.340902 kubelet[2809]: W0707 02:48:38.340863 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.341052 kubelet[2809]: E0707 02:48:38.340991 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.341200 kubelet[2809]: E0707 02:48:38.341187 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.341312 kubelet[2809]: W0707 02:48:38.341268 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.341383 kubelet[2809]: E0707 02:48:38.341345 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.341757 kubelet[2809]: E0707 02:48:38.341675 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.341757 kubelet[2809]: W0707 02:48:38.341685 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.341854 kubelet[2809]: E0707 02:48:38.341844 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.342073 kubelet[2809]: E0707 02:48:38.342026 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.342073 kubelet[2809]: W0707 02:48:38.342035 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.342284 kubelet[2809]: E0707 02:48:38.342203 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.344475 kubelet[2809]: E0707 02:48:38.344380 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.344475 kubelet[2809]: W0707 02:48:38.344393 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.344690 kubelet[2809]: E0707 02:48:38.344587 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.345409 kubelet[2809]: E0707 02:48:38.345366 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.345409 kubelet[2809]: W0707 02:48:38.345378 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.347247 kubelet[2809]: E0707 02:48:38.347068 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.347247 kubelet[2809]: W0707 02:48:38.347081 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.347703 kubelet[2809]: E0707 02:48:38.347578 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.347703 kubelet[2809]: W0707 02:48:38.347590 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.347703 kubelet[2809]: E0707 02:48:38.347607 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.347703 kubelet[2809]: E0707 02:48:38.347640 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.348601 kubelet[2809]: E0707 02:48:38.348474 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.348601 kubelet[2809]: W0707 02:48:38.348486 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.348601 kubelet[2809]: E0707 02:48:38.348500 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.349481 kubelet[2809]: E0707 02:48:38.349331 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.349481 kubelet[2809]: W0707 02:48:38.349343 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.349481 kubelet[2809]: E0707 02:48:38.349354 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.349481 kubelet[2809]: E0707 02:48:38.349372 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:38.350906 kubelet[2809]: E0707 02:48:38.350586 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.350906 kubelet[2809]: W0707 02:48:38.350613 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.350906 kubelet[2809]: E0707 02:48:38.350625 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:38.373371 kubelet[2809]: E0707 02:48:38.373349 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:38.373500 kubelet[2809]: W0707 02:48:38.373487 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:38.373577 kubelet[2809]: E0707 02:48:38.373565 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:39.630821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4105540204.mount: Deactivated successfully. Jul 7 02:48:39.815205 kubelet[2809]: E0707 02:48:39.814766 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:40.823672 containerd[1598]: time="2025-07-07T02:48:40.822336049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:40.824662 containerd[1598]: time="2025-07-07T02:48:40.824557241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 02:48:40.825993 containerd[1598]: time="2025-07-07T02:48:40.825228720Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:40.828295 containerd[1598]: time="2025-07-07T02:48:40.828249649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:40.829637 containerd[1598]: time="2025-07-07T02:48:40.829444928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.762551277s" Jul 7 02:48:40.829637 containerd[1598]: time="2025-07-07T02:48:40.829553514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 02:48:40.831685 
containerd[1598]: time="2025-07-07T02:48:40.830725576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 02:48:40.846663 containerd[1598]: time="2025-07-07T02:48:40.846544002Z" level=info msg="CreateContainer within sandbox \"422b171d15dbaba81ef8c1f4c8548219846c09ff4a81896950098ed68277b7e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 02:48:40.884679 containerd[1598]: time="2025-07-07T02:48:40.884572119Z" level=info msg="CreateContainer within sandbox \"422b171d15dbaba81ef8c1f4c8548219846c09ff4a81896950098ed68277b7e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"81f9b685d6d30cf70bc7d440f20b23e096222a125cf0b6e3846dfc497def354c\"" Jul 7 02:48:40.886222 containerd[1598]: time="2025-07-07T02:48:40.885345993Z" level=info msg="StartContainer for \"81f9b685d6d30cf70bc7d440f20b23e096222a125cf0b6e3846dfc497def354c\"" Jul 7 02:48:40.981854 containerd[1598]: time="2025-07-07T02:48:40.981819241Z" level=info msg="StartContainer for \"81f9b685d6d30cf70bc7d440f20b23e096222a125cf0b6e3846dfc497def354c\" returns successfully" Jul 7 02:48:41.817049 kubelet[2809]: E0707 02:48:41.816197 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:41.837972 systemd[1]: run-containerd-runc-k8s.io-81f9b685d6d30cf70bc7d440f20b23e096222a125cf0b6e3846dfc497def354c-runc.VElrxv.mount: Deactivated successfully. Jul 7 02:48:41.993596 kubelet[2809]: E0707 02:48:41.993508 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:41.994918 kubelet[2809]: W0707 02:48:41.994182 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:41.994918 kubelet[2809]: E0707 02:48:41.994258 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:41.996990 kubelet[2809]: E0707 02:48:41.995315 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:41.996990 kubelet[2809]: W0707 02:48:41.995333 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:41.996990 kubelet[2809]: E0707 02:48:41.995351 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:41.996990 kubelet[2809]: I0707 02:48:41.995599 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55cf5f5588-dnc2d" podStartSLOduration=2.231345534 podStartE2EDuration="4.99557822s" podCreationTimestamp="2025-07-07 02:48:37 +0000 UTC" firstStartedPulling="2025-07-07 02:48:38.066277546 +0000 UTC m=+20.409622294" lastFinishedPulling="2025-07-07 02:48:40.830510227 +0000 UTC m=+23.173854980" observedRunningTime="2025-07-07 02:48:41.994704423 +0000 UTC m=+24.338049193" watchObservedRunningTime="2025-07-07 02:48:41.99557822 +0000 UTC m=+24.338922984" Jul 7 02:48:41.998426 kubelet[2809]: E0707 02:48:41.998397 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:41.998426 kubelet[2809]: W0707 02:48:41.998416 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:41.998816 kubelet[2809]: E0707 02:48:41.998433 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:41.999232 kubelet[2809]: E0707 02:48:41.999214 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:41.999232 kubelet[2809]: W0707 02:48:41.999230 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:41.999351 kubelet[2809]: E0707 02:48:41.999243 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:41.999712 kubelet[2809]: E0707 02:48:41.999692 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:41.999712 kubelet[2809]: W0707 02:48:41.999706 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:41.999921 kubelet[2809]: E0707 02:48:41.999718 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.001863 kubelet[2809]: E0707 02:48:42.001804 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.001863 kubelet[2809]: W0707 02:48:42.001828 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.002174 kubelet[2809]: E0707 02:48:42.001852 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:42.002512 kubelet[2809]: E0707 02:48:42.002499 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.002679 kubelet[2809]: W0707 02:48:42.002571 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.002679 kubelet[2809]: E0707 02:48:42.002587 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.003036 kubelet[2809]: E0707 02:48:42.002934 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.003036 kubelet[2809]: W0707 02:48:42.002946 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.003036 kubelet[2809]: E0707 02:48:42.002957 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.003508 kubelet[2809]: E0707 02:48:42.003352 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.003508 kubelet[2809]: W0707 02:48:42.003364 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.003508 kubelet[2809]: E0707 02:48:42.003375 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.004372 kubelet[2809]: E0707 02:48:42.004003 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.004372 kubelet[2809]: W0707 02:48:42.004016 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.004372 kubelet[2809]: E0707 02:48:42.004028 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.004756 kubelet[2809]: E0707 02:48:42.004679 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.004756 kubelet[2809]: W0707 02:48:42.004692 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.004756 kubelet[2809]: E0707 02:48:42.004707 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:42.005476 kubelet[2809]: E0707 02:48:42.005399 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.005476 kubelet[2809]: W0707 02:48:42.005413 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.005476 kubelet[2809]: E0707 02:48:42.005425 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.005940 kubelet[2809]: E0707 02:48:42.005841 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.005940 kubelet[2809]: W0707 02:48:42.005853 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.005940 kubelet[2809]: E0707 02:48:42.005864 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.006260 kubelet[2809]: E0707 02:48:42.006250 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.006388 kubelet[2809]: W0707 02:48:42.006306 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.006388 kubelet[2809]: E0707 02:48:42.006324 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.007227 kubelet[2809]: E0707 02:48:42.007033 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.007227 kubelet[2809]: W0707 02:48:42.007047 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.007227 kubelet[2809]: E0707 02:48:42.007059 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.066465 kubelet[2809]: E0707 02:48:42.066018 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.066465 kubelet[2809]: W0707 02:48:42.066072 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.066465 kubelet[2809]: E0707 02:48:42.066121 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:42.067715 kubelet[2809]: E0707 02:48:42.067276 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.067715 kubelet[2809]: W0707 02:48:42.067316 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.067715 kubelet[2809]: E0707 02:48:42.067353 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.070119 kubelet[2809]: E0707 02:48:42.069393 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.070119 kubelet[2809]: W0707 02:48:42.069430 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.070622 kubelet[2809]: E0707 02:48:42.070573 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.070938 kubelet[2809]: E0707 02:48:42.070836 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.070938 kubelet[2809]: W0707 02:48:42.070893 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.071384 kubelet[2809]: E0707 02:48:42.071348 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.071747 kubelet[2809]: E0707 02:48:42.071711 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.071747 kubelet[2809]: W0707 02:48:42.071725 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.072914 kubelet[2809]: E0707 02:48:42.072843 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.073533 kubelet[2809]: E0707 02:48:42.073393 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.073533 kubelet[2809]: W0707 02:48:42.073411 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.074262 kubelet[2809]: E0707 02:48:42.074223 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:42.074870 kubelet[2809]: E0707 02:48:42.074400 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.074870 kubelet[2809]: W0707 02:48:42.074410 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.074870 kubelet[2809]: E0707 02:48:42.074443 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.075724 kubelet[2809]: E0707 02:48:42.075364 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.075724 kubelet[2809]: W0707 02:48:42.075385 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.075724 kubelet[2809]: E0707 02:48:42.075649 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.076104 kubelet[2809]: E0707 02:48:42.075886 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.076104 kubelet[2809]: W0707 02:48:42.075905 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.076104 kubelet[2809]: E0707 02:48:42.075988 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.076406 kubelet[2809]: E0707 02:48:42.076212 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.076406 kubelet[2809]: W0707 02:48:42.076223 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.076406 kubelet[2809]: E0707 02:48:42.076279 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.076501 kubelet[2809]: E0707 02:48:42.076466 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.076501 kubelet[2809]: W0707 02:48:42.076476 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.076574 kubelet[2809]: E0707 02:48:42.076507 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:42.077473 kubelet[2809]: E0707 02:48:42.076815 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.077473 kubelet[2809]: W0707 02:48:42.076833 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.077473 kubelet[2809]: E0707 02:48:42.076857 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.078737 kubelet[2809]: E0707 02:48:42.077442 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.078737 kubelet[2809]: W0707 02:48:42.078330 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.078737 kubelet[2809]: E0707 02:48:42.078395 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.079255 kubelet[2809]: E0707 02:48:42.079231 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.079966 kubelet[2809]: W0707 02:48:42.079378 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.079966 kubelet[2809]: E0707 02:48:42.079432 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.080819 kubelet[2809]: E0707 02:48:42.080790 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.080819 kubelet[2809]: W0707 02:48:42.080815 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.081030 kubelet[2809]: E0707 02:48:42.080836 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.082917 kubelet[2809]: E0707 02:48:42.082886 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.082917 kubelet[2809]: W0707 02:48:42.082908 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.083189 kubelet[2809]: E0707 02:48:42.082934 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:48:42.084891 kubelet[2809]: E0707 02:48:42.084865 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.084891 kubelet[2809]: W0707 02:48:42.084883 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.085130 kubelet[2809]: E0707 02:48:42.084967 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.085546 kubelet[2809]: E0707 02:48:42.085512 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:48:42.085546 kubelet[2809]: W0707 02:48:42.085528 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:48:42.085546 kubelet[2809]: E0707 02:48:42.085541 2809 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:48:42.371490 containerd[1598]: time="2025-07-07T02:48:42.371376362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:42.386903 containerd[1598]: time="2025-07-07T02:48:42.386692188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 02:48:42.390223 containerd[1598]: time="2025-07-07T02:48:42.389929089Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:42.391465 containerd[1598]: time="2025-07-07T02:48:42.391434892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:42.392235 containerd[1598]: time="2025-07-07T02:48:42.392211355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.561455363s" Jul 7 02:48:42.392363 containerd[1598]: time="2025-07-07T02:48:42.392323872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 02:48:42.394642 containerd[1598]: time="2025-07-07T02:48:42.394618059Z" level=info msg="CreateContainer within sandbox \"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 02:48:42.432720 containerd[1598]: time="2025-07-07T02:48:42.432541284Z" level=info msg="CreateContainer within sandbox 
\"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8\"" Jul 7 02:48:42.433552 containerd[1598]: time="2025-07-07T02:48:42.433530280Z" level=info msg="StartContainer for \"12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8\"" Jul 7 02:48:42.474583 systemd[1]: run-containerd-runc-k8s.io-12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8-runc.HYa4FB.mount: Deactivated successfully. Jul 7 02:48:42.513246 containerd[1598]: time="2025-07-07T02:48:42.513209522Z" level=info msg="StartContainer for \"12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8\" returns successfully" Jul 7 02:48:42.625983 containerd[1598]: time="2025-07-07T02:48:42.595965107Z" level=info msg="shim disconnected" id=12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8 namespace=k8s.io Jul 7 02:48:42.625983 containerd[1598]: time="2025-07-07T02:48:42.625890325Z" level=warning msg="cleaning up after shim disconnected" id=12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8 namespace=k8s.io Jul 7 02:48:42.625983 containerd[1598]: time="2025-07-07T02:48:42.625910768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:48:42.840873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12e35b5529c384ae816353183333e4cbe26240e378292d389d35d5c700c34fd8-rootfs.mount: Deactivated successfully. Jul 7 02:48:42.974912 kubelet[2809]: I0707 02:48:42.974792 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:48:42.980499 containerd[1598]: time="2025-07-07T02:48:42.980059212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 02:48:43.816061 kubelet[2809]: E0707 02:48:43.815528 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:45.814207 kubelet[2809]: E0707 02:48:45.814162 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:46.832672 containerd[1598]: time="2025-07-07T02:48:46.832293894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:46.833279 containerd[1598]: time="2025-07-07T02:48:46.832840970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 02:48:46.835000 containerd[1598]: time="2025-07-07T02:48:46.833872914Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:46.852960 containerd[1598]: time="2025-07-07T02:48:46.852918211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:46.853472 containerd[1598]: 
time="2025-07-07T02:48:46.853439450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.873342574s" Jul 7 02:48:46.853791 containerd[1598]: time="2025-07-07T02:48:46.853474923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 02:48:46.858387 containerd[1598]: time="2025-07-07T02:48:46.858350570Z" level=info msg="CreateContainer within sandbox \"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 02:48:46.869037 containerd[1598]: time="2025-07-07T02:48:46.869006676Z" level=info msg="CreateContainer within sandbox \"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c45f50b75e9c563d1f15e81cf973d60d01fdf336f9b4b82759bafe45c2010863\"" Jul 7 02:48:46.871430 containerd[1598]: time="2025-07-07T02:48:46.870432830Z" level=info msg="StartContainer for \"c45f50b75e9c563d1f15e81cf973d60d01fdf336f9b4b82759bafe45c2010863\"" Jul 7 02:48:46.945707 containerd[1598]: time="2025-07-07T02:48:46.945588384Z" level=info msg="StartContainer for \"c45f50b75e9c563d1f15e81cf973d60d01fdf336f9b4b82759bafe45c2010863\" returns successfully" Jul 7 02:48:47.601112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c45f50b75e9c563d1f15e81cf973d60d01fdf336f9b4b82759bafe45c2010863-rootfs.mount: Deactivated successfully. 
Jul 7 02:48:47.614396 containerd[1598]: time="2025-07-07T02:48:47.601759083Z" level=info msg="shim disconnected" id=c45f50b75e9c563d1f15e81cf973d60d01fdf336f9b4b82759bafe45c2010863 namespace=k8s.io Jul 7 02:48:47.614396 containerd[1598]: time="2025-07-07T02:48:47.613634003Z" level=warning msg="cleaning up after shim disconnected" id=c45f50b75e9c563d1f15e81cf973d60d01fdf336f9b4b82759bafe45c2010863 namespace=k8s.io Jul 7 02:48:47.614396 containerd[1598]: time="2025-07-07T02:48:47.613650309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:48:47.614835 kubelet[2809]: I0707 02:48:47.614786 2809 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 02:48:47.812997 kubelet[2809]: I0707 02:48:47.812292 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71e788e3-7905-4ea7-85e7-4768ecd449f5-calico-apiserver-certs\") pod \"calico-apiserver-549c48f8dd-dvzbn\" (UID: \"71e788e3-7905-4ea7-85e7-4768ecd449f5\") " pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" Jul 7 02:48:47.812997 kubelet[2809]: I0707 02:48:47.812355 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b1996f07-5d79-4f58-bbb7-a4685ca36d35-goldmane-key-pair\") pod \"goldmane-58fd7646b9-nnz8r\" (UID: \"b1996f07-5d79-4f58-bbb7-a4685ca36d35\") " pod="calico-system/goldmane-58fd7646b9-nnz8r" Jul 7 02:48:47.812997 kubelet[2809]: I0707 02:48:47.812392 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd-config-volume\") pod \"coredns-7c65d6cfc9-ddnt5\" (UID: \"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd\") " pod="kube-system/coredns-7c65d6cfc9-ddnt5" Jul 7 02:48:47.812997 kubelet[2809]: I0707 02:48:47.812423 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9sh\" (UniqueName: \"kubernetes.io/projected/8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd-kube-api-access-jx9sh\") pod \"coredns-7c65d6cfc9-ddnt5\" (UID: \"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd\") " pod="kube-system/coredns-7c65d6cfc9-ddnt5" Jul 7 02:48:47.812997 kubelet[2809]: I0707 02:48:47.812455 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvw7\" (UniqueName: \"kubernetes.io/projected/bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd-kube-api-access-7cvw7\") pod \"calico-kube-controllers-84b8488869-mvrcm\" (UID: \"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd\") " pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" Jul 7 02:48:47.813556 kubelet[2809]: I0707 02:48:47.812482 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k9rv\" (UniqueName: \"kubernetes.io/projected/1732a1fd-7638-44c8-9b73-7935edb7cb78-kube-api-access-2k9rv\") pod \"whisker-6bb4cf4bdb-rwl2k\" (UID: \"1732a1fd-7638-44c8-9b73-7935edb7cb78\") " pod="calico-system/whisker-6bb4cf4bdb-rwl2k" Jul 7 02:48:47.813556 kubelet[2809]: I0707 02:48:47.812516 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gvx\" (UniqueName: \"kubernetes.io/projected/71e788e3-7905-4ea7-85e7-4768ecd449f5-kube-api-access-d2gvx\") pod \"calico-apiserver-549c48f8dd-dvzbn\" (UID: 
\"71e788e3-7905-4ea7-85e7-4768ecd449f5\") " pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" Jul 7 02:48:47.813556 kubelet[2809]: I0707 02:48:47.812543 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6fn\" (UniqueName: \"kubernetes.io/projected/d34f9b09-d10e-4fb0-9e83-4c578ee4bf07-kube-api-access-zl6fn\") pod \"calico-apiserver-549c48f8dd-b5dg9\" (UID: \"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07\") " pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" Jul 7 02:48:47.813556 kubelet[2809]: I0707 02:48:47.812573 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1996f07-5d79-4f58-bbb7-a4685ca36d35-config\") pod \"goldmane-58fd7646b9-nnz8r\" (UID: \"b1996f07-5d79-4f58-bbb7-a4685ca36d35\") " pod="calico-system/goldmane-58fd7646b9-nnz8r" Jul 7 02:48:47.813556 kubelet[2809]: I0707 02:48:47.812600 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-backend-key-pair\") pod \"whisker-6bb4cf4bdb-rwl2k\" (UID: \"1732a1fd-7638-44c8-9b73-7935edb7cb78\") " pod="calico-system/whisker-6bb4cf4bdb-rwl2k" Jul 7 02:48:47.813984 kubelet[2809]: I0707 02:48:47.812630 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxnm\" (UniqueName: \"kubernetes.io/projected/6e39c44d-9500-40c6-a8ed-19a5f1bb302d-kube-api-access-4cxnm\") pod \"coredns-7c65d6cfc9-grxdk\" (UID: \"6e39c44d-9500-40c6-a8ed-19a5f1bb302d\") " pod="kube-system/coredns-7c65d6cfc9-grxdk" Jul 7 02:48:47.813984 kubelet[2809]: I0707 02:48:47.812658 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-ca-bundle\") pod \"whisker-6bb4cf4bdb-rwl2k\" (UID: \"1732a1fd-7638-44c8-9b73-7935edb7cb78\") " pod="calico-system/whisker-6bb4cf4bdb-rwl2k" Jul 7 02:48:47.813984 kubelet[2809]: I0707 02:48:47.812710 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7qv\" (UniqueName: \"kubernetes.io/projected/b1996f07-5d79-4f58-bbb7-a4685ca36d35-kube-api-access-nf7qv\") pod \"goldmane-58fd7646b9-nnz8r\" (UID: \"b1996f07-5d79-4f58-bbb7-a4685ca36d35\") " pod="calico-system/goldmane-58fd7646b9-nnz8r" Jul 7 02:48:47.813984 kubelet[2809]: I0707 02:48:47.812742 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e39c44d-9500-40c6-a8ed-19a5f1bb302d-config-volume\") pod \"coredns-7c65d6cfc9-grxdk\" (UID: \"6e39c44d-9500-40c6-a8ed-19a5f1bb302d\") " pod="kube-system/coredns-7c65d6cfc9-grxdk" Jul 7 02:48:47.813984 kubelet[2809]: I0707 02:48:47.812872 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d34f9b09-d10e-4fb0-9e83-4c578ee4bf07-calico-apiserver-certs\") pod \"calico-apiserver-549c48f8dd-b5dg9\" (UID: \"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07\") " pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" Jul 7 02:48:47.814426 kubelet[2809]: I0707 02:48:47.812932 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1996f07-5d79-4f58-bbb7-a4685ca36d35-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-nnz8r\" (UID: \"b1996f07-5d79-4f58-bbb7-a4685ca36d35\") " pod="calico-system/goldmane-58fd7646b9-nnz8r" Jul 7 02:48:47.814426 kubelet[2809]: I0707 02:48:47.813041 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd-tigera-ca-bundle\") pod \"calico-kube-controllers-84b8488869-mvrcm\" (UID: \"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd\") " pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" Jul 7 02:48:47.831616 containerd[1598]: time="2025-07-07T02:48:47.831562293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwz77,Uid:6d6134fd-985e-434b-964a-df65b698ac32,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:47.990636 containerd[1598]: time="2025-07-07T02:48:47.989086186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb4cf4bdb-rwl2k,Uid:1732a1fd-7638-44c8-9b73-7935edb7cb78,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:47.997515 containerd[1598]: time="2025-07-07T02:48:47.997473114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nnz8r,Uid:b1996f07-5d79-4f58-bbb7-a4685ca36d35,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:48.001570 containerd[1598]: time="2025-07-07T02:48:48.001363199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-grxdk,Uid:6e39c44d-9500-40c6-a8ed-19a5f1bb302d,Namespace:kube-system,Attempt:0,}" Jul 7 02:48:48.007751 containerd[1598]: time="2025-07-07T02:48:48.007454606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ddnt5,Uid:8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd,Namespace:kube-system,Attempt:0,}" Jul 7 02:48:48.008920 containerd[1598]: time="2025-07-07T02:48:48.008722009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8488869-mvrcm,Uid:bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:48.040906 containerd[1598]: time="2025-07-07T02:48:48.037676647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 02:48:48.210780 containerd[1598]: time="2025-07-07T02:48:48.210274467Z" level=error msg="Failed to destroy network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.221786 containerd[1598]: time="2025-07-07T02:48:48.221731631Z" level=error msg="encountered an error cleaning up failed sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.231951 containerd[1598]: time="2025-07-07T02:48:48.231899276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwz77,Uid:6d6134fd-985e-434b-964a-df65b698ac32,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.244869 containerd[1598]: time="2025-07-07T02:48:48.244276603Z" level=error msg="Failed to destroy network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.245417 containerd[1598]: time="2025-07-07T02:48:48.245386655Z" level=error msg="encountered an error cleaning up failed sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.245588 containerd[1598]: time="2025-07-07T02:48:48.245565468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb4cf4bdb-rwl2k,Uid:1732a1fd-7638-44c8-9b73-7935edb7cb78,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.246223 kubelet[2809]: E0707 02:48:48.246177 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.246362 kubelet[2809]: E0707 02:48:48.246266 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb4cf4bdb-rwl2k" Jul 7 02:48:48.246362 kubelet[2809]: E0707 02:48:48.246292 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb4cf4bdb-rwl2k" Jul 7 02:48:48.246362 kubelet[2809]: E0707 02:48:48.246343 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bb4cf4bdb-rwl2k_calico-system(1732a1fd-7638-44c8-9b73-7935edb7cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bb4cf4bdb-rwl2k_calico-system(1732a1fd-7638-44c8-9b73-7935edb7cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb4cf4bdb-rwl2k" podUID="1732a1fd-7638-44c8-9b73-7935edb7cb78" Jul 7 02:48:48.247883 kubelet[2809]: E0707 02:48:48.246553 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.247883 kubelet[2809]: E0707 02:48:48.246580 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:48.247883 kubelet[2809]: E0707 02:48:48.246600 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fwz77" Jul 7 02:48:48.248076 kubelet[2809]: E0707 02:48:48.246626 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fwz77_calico-system(6d6134fd-985e-434b-964a-df65b698ac32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fwz77_calico-system(6d6134fd-985e-434b-964a-df65b698ac32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:48.265575 containerd[1598]: time="2025-07-07T02:48:48.265529302Z" level=error msg="Failed to destroy network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.266080 containerd[1598]: time="2025-07-07T02:48:48.266049133Z" level=error msg="encountered an error cleaning up failed sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.266252 containerd[1598]: time="2025-07-07T02:48:48.266228914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nnz8r,Uid:b1996f07-5d79-4f58-bbb7-a4685ca36d35,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.266613 kubelet[2809]: E0707 02:48:48.266571 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.266704 kubelet[2809]: E0707 02:48:48.266645 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-nnz8r" Jul 7 02:48:48.266704 kubelet[2809]: E0707 02:48:48.266667 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-nnz8r" Jul 7 02:48:48.269252 kubelet[2809]: E0707 02:48:48.266722 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-nnz8r_calico-system(b1996f07-5d79-4f58-bbb7-a4685ca36d35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-nnz8r_calico-system(b1996f07-5d79-4f58-bbb7-a4685ca36d35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-nnz8r" podUID="b1996f07-5d79-4f58-bbb7-a4685ca36d35" Jul 7 02:48:48.284856 containerd[1598]: time="2025-07-07T02:48:48.284808387Z" level=error msg="Failed to destroy network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.285389 containerd[1598]: time="2025-07-07T02:48:48.285358560Z" level=error msg="encountered an error cleaning up failed sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.285520 containerd[1598]: time="2025-07-07T02:48:48.285497186Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-84b8488869-mvrcm,Uid:bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.286365 kubelet[2809]: E0707 02:48:48.286321 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.286501 kubelet[2809]: E0707 02:48:48.286390 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" Jul 7 02:48:48.286501 kubelet[2809]: E0707 02:48:48.286411 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" Jul 7 02:48:48.286501 kubelet[2809]: E0707 02:48:48.286459 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84b8488869-mvrcm_calico-system(bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84b8488869-mvrcm_calico-system(bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" podUID="bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd" Jul 7 02:48:48.290565 containerd[1598]: time="2025-07-07T02:48:48.290532640Z" level=error msg="Failed to destroy network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.291089 containerd[1598]: time="2025-07-07T02:48:48.290933646Z" level=error msg="encountered an error cleaning up failed sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 7 02:48:48.291089 containerd[1598]: time="2025-07-07T02:48:48.291000216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-grxdk,Uid:6e39c44d-9500-40c6-a8ed-19a5f1bb302d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.291286 kubelet[2809]: E0707 02:48:48.291250 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.291426 kubelet[2809]: E0707 02:48:48.291299 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-grxdk" Jul 7 02:48:48.291426 kubelet[2809]: E0707 02:48:48.291318 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-grxdk" Jul 7 02:48:48.291426 kubelet[2809]: E0707 02:48:48.291353 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-grxdk_kube-system(6e39c44d-9500-40c6-a8ed-19a5f1bb302d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-grxdk_kube-system(6e39c44d-9500-40c6-a8ed-19a5f1bb302d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-grxdk" podUID="6e39c44d-9500-40c6-a8ed-19a5f1bb302d" Jul 7 02:48:48.301669 containerd[1598]: time="2025-07-07T02:48:48.301631791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-b5dg9,Uid:d34f9b09-d10e-4fb0-9e83-4c578ee4bf07,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:48:48.302073 containerd[1598]: time="2025-07-07T02:48:48.301971069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-dvzbn,Uid:71e788e3-7905-4ea7-85e7-4768ecd449f5,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:48:48.325211 containerd[1598]: time="2025-07-07T02:48:48.325164882Z" level=error msg="Failed to destroy network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.326160 containerd[1598]: time="2025-07-07T02:48:48.326077748Z" level=error msg="encountered an error cleaning up failed sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.326909 containerd[1598]: time="2025-07-07T02:48:48.326668678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ddnt5,Uid:8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.327036 kubelet[2809]: E0707 02:48:48.326940 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.327036 kubelet[2809]: E0707 02:48:48.327013 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ddnt5" Jul 7 02:48:48.327036 kubelet[2809]: E0707 02:48:48.327034 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ddnt5" Jul 7 02:48:48.327281 kubelet[2809]: E0707 02:48:48.327085 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ddnt5_kube-system(8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ddnt5_kube-system(8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ddnt5" podUID="8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd" Jul 7 02:48:48.392070 containerd[1598]: time="2025-07-07T02:48:48.392004476Z" level=error msg="Failed to destroy network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.392795 containerd[1598]: time="2025-07-07T02:48:48.392673666Z" level=error msg="encountered an error cleaning up failed sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.392941 containerd[1598]: time="2025-07-07T02:48:48.392825273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-b5dg9,Uid:d34f9b09-d10e-4fb0-9e83-4c578ee4bf07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.394294 kubelet[2809]: E0707 02:48:48.393197 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.394294 kubelet[2809]: E0707 02:48:48.393264 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" Jul 7 02:48:48.394294 kubelet[2809]: E0707 02:48:48.393285 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" Jul 7 02:48:48.394502 kubelet[2809]: E0707 02:48:48.393338 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-549c48f8dd-b5dg9_calico-apiserver(d34f9b09-d10e-4fb0-9e83-4c578ee4bf07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-549c48f8dd-b5dg9_calico-apiserver(d34f9b09-d10e-4fb0-9e83-4c578ee4bf07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" podUID="d34f9b09-d10e-4fb0-9e83-4c578ee4bf07" Jul 7 02:48:48.401978 containerd[1598]: time="2025-07-07T02:48:48.401910750Z" level=error 
msg="Failed to destroy network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.402284 containerd[1598]: time="2025-07-07T02:48:48.402254270Z" level=error msg="encountered an error cleaning up failed sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.402342 containerd[1598]: time="2025-07-07T02:48:48.402311706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-dvzbn,Uid:71e788e3-7905-4ea7-85e7-4768ecd449f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.402651 kubelet[2809]: E0707 02:48:48.402522 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:48.402651 kubelet[2809]: E0707 02:48:48.402587 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" Jul 7 02:48:48.402651 kubelet[2809]: E0707 02:48:48.402605 2809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" Jul 7 02:48:48.403045 kubelet[2809]: E0707 02:48:48.402851 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-549c48f8dd-dvzbn_calico-apiserver(71e788e3-7905-4ea7-85e7-4768ecd449f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-549c48f8dd-dvzbn_calico-apiserver(71e788e3-7905-4ea7-85e7-4768ecd449f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" 
podUID="71e788e3-7905-4ea7-85e7-4768ecd449f5" Jul 7 02:48:48.876185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6-shm.mount: Deactivated successfully. Jul 7 02:48:49.033365 kubelet[2809]: I0707 02:48:49.032347 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:48:49.041801 containerd[1598]: time="2025-07-07T02:48:49.041554314Z" level=info msg="StopPodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\"" Jul 7 02:48:49.043906 kubelet[2809]: I0707 02:48:49.042739 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:48:49.044042 containerd[1598]: time="2025-07-07T02:48:49.042984192Z" level=info msg="Ensure that sandbox 69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa in task-service has been cleanup successfully" Jul 7 02:48:49.044042 containerd[1598]: time="2025-07-07T02:48:49.043517210Z" level=info msg="StopPodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\"" Jul 7 02:48:49.044042 containerd[1598]: time="2025-07-07T02:48:49.043682332Z" level=info msg="Ensure that sandbox 0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6 in task-service has been cleanup successfully" Jul 7 02:48:49.047548 kubelet[2809]: I0707 02:48:49.047418 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:48:49.048399 containerd[1598]: time="2025-07-07T02:48:49.048304346Z" level=info msg="StopPodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\"" Jul 7 02:48:49.048964 containerd[1598]: time="2025-07-07T02:48:49.048684872Z" level=info msg="Ensure that sandbox de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069 in task-service has been cleanup successfully" Jul 7 02:48:49.050174 kubelet[2809]: I0707 02:48:49.049333 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:48:49.050579 containerd[1598]: time="2025-07-07T02:48:49.049794077Z" level=info msg="StopPodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\"" Jul 7 02:48:49.050579 containerd[1598]: time="2025-07-07T02:48:49.049952252Z" level=info msg="Ensure that sandbox e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57 in task-service has been cleanup successfully" Jul 7 02:48:49.054752 kubelet[2809]: I0707 02:48:49.054729 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:48:49.055556 containerd[1598]: time="2025-07-07T02:48:49.055528699Z" level=info msg="StopPodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\"" Jul 7 02:48:49.056161 containerd[1598]: time="2025-07-07T02:48:49.055688948Z" level=info msg="Ensure that sandbox 84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275 in task-service has been cleanup successfully" Jul 7 02:48:49.059735 kubelet[2809]: I0707 02:48:49.059653 2809 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:48:49.061178 containerd[1598]: time="2025-07-07T02:48:49.060899696Z" level=info msg="StopPodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\"" Jul 7 02:48:49.062792 containerd[1598]: time="2025-07-07T02:48:49.061874702Z" level=info msg="Ensure that sandbox 745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094 in task-service has been cleanup successfully" Jul 7 02:48:49.062855 kubelet[2809]: I0707 02:48:49.062434 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:48:49.063049 containerd[1598]: time="2025-07-07T02:48:49.063018699Z" level=info msg="StopPodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\"" Jul 7 02:48:49.063351 containerd[1598]: time="2025-07-07T02:48:49.063334598Z" level=info msg="Ensure that sandbox ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c in task-service has been cleanup successfully" Jul 7 02:48:49.073223 kubelet[2809]: I0707 02:48:49.073194 2809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:48:49.076399 containerd[1598]: time="2025-07-07T02:48:49.076267922Z" level=info msg="StopPodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\"" Jul 7 02:48:49.077064 containerd[1598]: time="2025-07-07T02:48:49.076447740Z" level=info msg="Ensure that sandbox 7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656 in task-service has been cleanup successfully" Jul 7 02:48:49.134978 containerd[1598]: time="2025-07-07T02:48:49.134705210Z" level=error msg="StopPodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" failed" error="failed to destroy network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.137293 kubelet[2809]: E0707 02:48:49.137051 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:48:49.137293 kubelet[2809]: E0707 02:48:49.137121 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57"} Jul 7 02:48:49.137293 kubelet[2809]: E0707 02:48:49.137224 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.137293 
kubelet[2809]: E0707 02:48:49.137259 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" podUID="bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd" Jul 7 02:48:49.153571 containerd[1598]: time="2025-07-07T02:48:49.153483934Z" level=error msg="StopPodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" failed" error="failed to destroy network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.154271 kubelet[2809]: E0707 02:48:49.154122 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:48:49.154271 kubelet[2809]: E0707 02:48:49.154183 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275"} Jul 7 02:48:49.154271 kubelet[2809]: E0707 02:48:49.154217 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.154271 kubelet[2809]: E0707 02:48:49.154241 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ddnt5" podUID="8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd" Jul 7 02:48:49.177237 containerd[1598]: time="2025-07-07T02:48:49.177134654Z" level=error msg="StopPodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" failed" error="failed to destroy network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 02:48:49.177542 kubelet[2809]: E0707 02:48:49.177410 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:48:49.177542 kubelet[2809]: E0707 02:48:49.177503 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094"} Jul 7 02:48:49.178239 kubelet[2809]: E0707 02:48:49.177567 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71e788e3-7905-4ea7-85e7-4768ecd449f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.178239 kubelet[2809]: E0707 02:48:49.177632 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71e788e3-7905-4ea7-85e7-4768ecd449f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" podUID="71e788e3-7905-4ea7-85e7-4768ecd449f5" Jul 7 02:48:49.192180 containerd[1598]: time="2025-07-07T02:48:49.191989941Z" level=error msg="StopPodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" failed" error="failed to destroy network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.192859 kubelet[2809]: E0707 02:48:49.192684 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:48:49.192859 kubelet[2809]: E0707 02:48:49.192741 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa"} Jul 7 02:48:49.192859 kubelet[2809]: E0707 02:48:49.192786 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1732a1fd-7638-44c8-9b73-7935edb7cb78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.192859 kubelet[2809]: E0707 02:48:49.192818 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1732a1fd-7638-44c8-9b73-7935edb7cb78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb4cf4bdb-rwl2k" podUID="1732a1fd-7638-44c8-9b73-7935edb7cb78" Jul 7 02:48:49.195796 containerd[1598]: time="2025-07-07T02:48:49.195665996Z" level=error msg="StopPodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" failed" error="failed to destroy network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.196127 kubelet[2809]: E0707 02:48:49.195883 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:48:49.196127 kubelet[2809]: E0707 02:48:49.195925 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c"} Jul 7 02:48:49.196127 kubelet[2809]: E0707 02:48:49.195955 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e39c44d-9500-40c6-a8ed-19a5f1bb302d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.196127 kubelet[2809]: E0707 02:48:49.195979 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e39c44d-9500-40c6-a8ed-19a5f1bb302d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-grxdk" podUID="6e39c44d-9500-40c6-a8ed-19a5f1bb302d" Jul 7 02:48:49.198320 containerd[1598]: time="2025-07-07T02:48:49.198162462Z" level=error msg="StopPodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" failed" error="failed to 
destroy network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.198816 kubelet[2809]: E0707 02:48:49.198665 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:48:49.198816 kubelet[2809]: E0707 02:48:49.198722 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6"} Jul 7 02:48:49.198816 kubelet[2809]: E0707 02:48:49.198750 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d6134fd-985e-434b-964a-df65b698ac32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.198816 kubelet[2809]: E0707 02:48:49.198771 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d6134fd-985e-434b-964a-df65b698ac32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fwz77" podUID="6d6134fd-985e-434b-964a-df65b698ac32" Jul 7 02:48:49.205186 containerd[1598]: time="2025-07-07T02:48:49.205117733Z" level=error msg="StopPodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" failed" error="failed to destroy network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.205639 kubelet[2809]: E0707 02:48:49.205501 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:48:49.205639 kubelet[2809]: E0707 02:48:49.205545 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069"} Jul 7 02:48:49.205639 kubelet[2809]: E0707 02:48:49.205586 2809 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.205639 kubelet[2809]: E0707 02:48:49.205609 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" podUID="d34f9b09-d10e-4fb0-9e83-4c578ee4bf07" Jul 7 02:48:49.216180 containerd[1598]: time="2025-07-07T02:48:49.216071652Z" level=error msg="StopPodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" failed" error="failed to destroy network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:48:49.216682 kubelet[2809]: E0707 02:48:49.216419 2809 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:48:49.216682 kubelet[2809]: E0707 02:48:49.216474 2809 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656"} Jul 7 02:48:49.216682 kubelet[2809]: E0707 02:48:49.216517 2809 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1996f07-5d79-4f58-bbb7-a4685ca36d35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:48:49.216682 kubelet[2809]: E0707 02:48:49.216550 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1996f07-5d79-4f58-bbb7-a4685ca36d35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-nnz8r" 
podUID="b1996f07-5d79-4f58-bbb7-a4685ca36d35" Jul 7 02:48:55.538993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494446791.mount: Deactivated successfully. Jul 7 02:48:55.634821 containerd[1598]: time="2025-07-07T02:48:55.611020146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 02:48:55.645674 containerd[1598]: time="2025-07-07T02:48:55.645628193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.593098554s" Jul 7 02:48:55.645847 containerd[1598]: time="2025-07-07T02:48:55.645831111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 02:48:55.650985 containerd[1598]: time="2025-07-07T02:48:55.650942818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:55.696104 containerd[1598]: time="2025-07-07T02:48:55.695898198Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:55.696526 containerd[1598]: time="2025-07-07T02:48:55.696501214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:55.728851 containerd[1598]: time="2025-07-07T02:48:55.728762490Z" level=info msg="CreateContainer within sandbox \"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 02:48:55.785578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905444925.mount: Deactivated successfully. 
Jul 7 02:48:55.793005 containerd[1598]: time="2025-07-07T02:48:55.792909182Z" level=info msg="CreateContainer within sandbox \"48d822131f4c03c8bc20228e2b04a4c5b8c829ce384272f3c984e1cd38c04219\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0d9de881d7922dfe1edff4e0d13f53cb0bd54665bf492e29886da9d72ca0541b\"" Jul 7 02:48:55.794776 containerd[1598]: time="2025-07-07T02:48:55.794605171Z" level=info msg="StartContainer for \"0d9de881d7922dfe1edff4e0d13f53cb0bd54665bf492e29886da9d72ca0541b\"" Jul 7 02:48:55.981650 containerd[1598]: time="2025-07-07T02:48:55.981588014Z" level=info msg="StartContainer for \"0d9de881d7922dfe1edff4e0d13f53cb0bd54665bf492e29886da9d72ca0541b\" returns successfully" Jul 7 02:48:56.215487 kubelet[2809]: I0707 02:48:56.203824 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2xqkz" podStartSLOduration=1.853185384 podStartE2EDuration="19.186805992s" podCreationTimestamp="2025-07-07 02:48:37 +0000 UTC" firstStartedPulling="2025-07-07 02:48:38.314362254 +0000 UTC m=+20.657707000" lastFinishedPulling="2025-07-07 02:48:55.647982857 +0000 UTC m=+37.991327608" observedRunningTime="2025-07-07 02:48:56.17518857 +0000 UTC m=+38.518533341" watchObservedRunningTime="2025-07-07 02:48:56.186805992 +0000 UTC m=+38.530150793" Jul 7 02:48:56.231516 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 02:48:56.232509 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 02:48:56.453902 containerd[1598]: time="2025-07-07T02:48:56.452711972Z" level=info msg="StopPodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\"" Jul 7 02:48:56.609177 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:48:56.613461 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:48:56.613617 systemd-resolved[1505]: Flushed all caches. Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.667 [INFO][3982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.669 [INFO][3982] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" iface="eth0" netns="/var/run/netns/cni-d6749a31-753e-fd54-5748-f98447be678d" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.669 [INFO][3982] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" iface="eth0" netns="/var/run/netns/cni-d6749a31-753e-fd54-5748-f98447be678d" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.670 [INFO][3982] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" iface="eth0" netns="/var/run/netns/cni-d6749a31-753e-fd54-5748-f98447be678d" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.670 [INFO][3982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.670 [INFO][3982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.797 [INFO][3989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.799 [INFO][3989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.800 [INFO][3989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.809 [WARNING][3989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.809 [INFO][3989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.811 [INFO][3989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:48:56.816269 containerd[1598]: 2025-07-07 02:48:56.813 [INFO][3982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:48:56.822850 systemd[1]: run-netns-cni\x2dd6749a31\x2d753e\x2dfd54\x2d5748\x2df98447be678d.mount: Deactivated successfully. 
Jul 7 02:48:56.830493 containerd[1598]: time="2025-07-07T02:48:56.830441494Z" level=info msg="TearDown network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" successfully" Jul 7 02:48:56.830569 containerd[1598]: time="2025-07-07T02:48:56.830500753Z" level=info msg="StopPodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" returns successfully" Jul 7 02:48:57.030068 kubelet[2809]: I0707 02:48:57.029925 2809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-backend-key-pair\") pod \"1732a1fd-7638-44c8-9b73-7935edb7cb78\" (UID: \"1732a1fd-7638-44c8-9b73-7935edb7cb78\") " Jul 7 02:48:57.036041 kubelet[2809]: I0707 02:48:57.035656 2809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-ca-bundle\") pod \"1732a1fd-7638-44c8-9b73-7935edb7cb78\" (UID: \"1732a1fd-7638-44c8-9b73-7935edb7cb78\") " Jul 7 02:48:57.036041 kubelet[2809]: I0707 02:48:57.035718 2809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k9rv\" (UniqueName: \"kubernetes.io/projected/1732a1fd-7638-44c8-9b73-7935edb7cb78-kube-api-access-2k9rv\") pod \"1732a1fd-7638-44c8-9b73-7935edb7cb78\" (UID: \"1732a1fd-7638-44c8-9b73-7935edb7cb78\") " Jul 7 02:48:57.068728 kubelet[2809]: I0707 02:48:57.067798 2809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1732a1fd-7638-44c8-9b73-7935edb7cb78" (UID: "1732a1fd-7638-44c8-9b73-7935edb7cb78"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 02:48:57.071663 kubelet[2809]: I0707 02:48:57.071616 2809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1732a1fd-7638-44c8-9b73-7935edb7cb78-kube-api-access-2k9rv" (OuterVolumeSpecName: "kube-api-access-2k9rv") pod "1732a1fd-7638-44c8-9b73-7935edb7cb78" (UID: "1732a1fd-7638-44c8-9b73-7935edb7cb78"). InnerVolumeSpecName "kube-api-access-2k9rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 02:48:57.072107 kubelet[2809]: I0707 02:48:57.072059 2809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1732a1fd-7638-44c8-9b73-7935edb7cb78" (UID: "1732a1fd-7638-44c8-9b73-7935edb7cb78"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 02:48:57.074582 systemd[1]: var-lib-kubelet-pods-1732a1fd\x2d7638\x2d44c8\x2d9b73\x2d7935edb7cb78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2k9rv.mount: Deactivated successfully. Jul 7 02:48:57.078197 systemd[1]: var-lib-kubelet-pods-1732a1fd\x2d7638\x2d44c8\x2d9b73\x2d7935edb7cb78-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 02:48:57.127628 kubelet[2809]: I0707 02:48:57.126183 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:48:57.138923 kubelet[2809]: I0707 02:48:57.138888 2809 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-backend-key-pair\") on node \"srv-ijdf9.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:48:57.139849 kubelet[2809]: I0707 02:48:57.139816 2809 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1732a1fd-7638-44c8-9b73-7935edb7cb78-whisker-ca-bundle\") on node \"srv-ijdf9.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:48:57.139937 kubelet[2809]: I0707 02:48:57.139927 2809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k9rv\" (UniqueName: \"kubernetes.io/projected/1732a1fd-7638-44c8-9b73-7935edb7cb78-kube-api-access-2k9rv\") on node \"srv-ijdf9.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:48:57.347031 kubelet[2809]: I0707 02:48:57.346597 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vcj\" (UniqueName: \"kubernetes.io/projected/17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76-kube-api-access-57vcj\") pod \"whisker-677fcf4f54-2485h\" (UID: \"17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76\") " pod="calico-system/whisker-677fcf4f54-2485h" Jul 7 02:48:57.347031 kubelet[2809]: I0707 02:48:57.346710 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76-whisker-backend-key-pair\") pod \"whisker-677fcf4f54-2485h\" (UID: \"17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76\") " pod="calico-system/whisker-677fcf4f54-2485h" Jul 7 02:48:57.347031 kubelet[2809]: I0707 02:48:57.346780 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76-whisker-ca-bundle\") pod \"whisker-677fcf4f54-2485h\" (UID: \"17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76\") " pod="calico-system/whisker-677fcf4f54-2485h" Jul 7 02:48:57.530002 containerd[1598]: time="2025-07-07T02:48:57.529883095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677fcf4f54-2485h,Uid:17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76,Namespace:calico-system,Attempt:0,}" Jul 7 02:48:57.731497 systemd-networkd[1261]: caliaa94c06e5f3: Link UP Jul 7 02:48:57.732767 systemd-networkd[1261]: caliaa94c06e5f3: Gained carrier Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.595 [INFO][4011] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.607 [INFO][4011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0 whisker-677fcf4f54- calico-system 17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76 907 0 2025-07-07 02:48:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:677fcf4f54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com whisker-677fcf4f54-2485h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaa94c06e5f3 [] [] }} 
ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.607 [INFO][4011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.645 [INFO][4024] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" HandleID="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.645 [INFO][4024] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" HandleID="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f950), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"whisker-677fcf4f54-2485h", "timestamp":"2025-07-07 02:48:57.645398367 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.645 [INFO][4024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.645 [INFO][4024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.645 [INFO][4024] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.665 [INFO][4024] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.678 [INFO][4024] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.686 [INFO][4024] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.688 [INFO][4024] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.691 [INFO][4024] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.691 [INFO][4024] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.694 [INFO][4024] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479 Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.702 [INFO][4024] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.712 [INFO][4024] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.129/26] block=192.168.97.128/26 handle="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.712 [INFO][4024] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.129/26] handle="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.713 [INFO][4024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:48:57.753547 containerd[1598]: 2025-07-07 02:48:57.713 [INFO][4024] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.129/26] IPv6=[] ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" HandleID="k8s-pod-network.9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.763594 containerd[1598]: 2025-07-07 02:48:57.716 [INFO][4011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0", GenerateName:"whisker-677fcf4f54-", Namespace:"calico-system", SelfLink:"", UID:"17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"677fcf4f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"whisker-677fcf4f54-2485h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa94c06e5f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:48:57.763594 containerd[1598]: 2025-07-07 02:48:57.716 [INFO][4011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.129/32] ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.763594 containerd[1598]: 2025-07-07 02:48:57.716 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa94c06e5f3 ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.763594 containerd[1598]: 2025-07-07 02:48:57.733 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.763594 containerd[1598]: 2025-07-07 02:48:57.734 [INFO][4011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" 
Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0", GenerateName:"whisker-677fcf4f54-", Namespace:"calico-system", SelfLink:"", UID:"17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"677fcf4f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479", Pod:"whisker-677fcf4f54-2485h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa94c06e5f3", MAC:"8e:54:c3:b9:9b:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:48:57.763594 containerd[1598]: 2025-07-07 02:48:57.745 [INFO][4011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479" Namespace="calico-system" Pod="whisker-677fcf4f54-2485h" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--677fcf4f54--2485h-eth0" Jul 7 02:48:57.794290 containerd[1598]: time="2025-07-07T02:48:57.793514467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:48:57.794290 containerd[1598]: time="2025-07-07T02:48:57.793590182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:48:57.794290 containerd[1598]: time="2025-07-07T02:48:57.793628411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:57.794290 containerd[1598]: time="2025-07-07T02:48:57.793760087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:48:57.827969 kubelet[2809]: I0707 02:48:57.827933 2809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1732a1fd-7638-44c8-9b73-7935edb7cb78" path="/var/lib/kubelet/pods/1732a1fd-7638-44c8-9b73-7935edb7cb78/volumes" Jul 7 02:48:57.994271 containerd[1598]: time="2025-07-07T02:48:57.993772871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677fcf4f54-2485h,Uid:17b1a8cf-c9ae-4a42-9af0-9ea141ac0f76,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479\"" Jul 7 02:48:58.021256 containerd[1598]: time="2025-07-07T02:48:58.021203577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 02:48:59.414194 systemd-networkd[1261]: caliaa94c06e5f3: Gained IPv6LL Jul 7 02:48:59.732815 containerd[1598]: time="2025-07-07T02:48:59.732693397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:59.733853 containerd[1598]: time="2025-07-07T02:48:59.733095535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 02:48:59.734562 containerd[1598]: time="2025-07-07T02:48:59.734214888Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:59.736839 containerd[1598]: time="2025-07-07T02:48:59.736812159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:48:59.737596 containerd[1598]: time="2025-07-07T02:48:59.737570954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.716130054s" Jul 7 02:48:59.737929 containerd[1598]: time="2025-07-07T02:48:59.737603100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 02:48:59.741499 containerd[1598]: time="2025-07-07T02:48:59.741317084Z" level=info msg="CreateContainer within sandbox \"9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 02:48:59.750811 containerd[1598]: time="2025-07-07T02:48:59.750695380Z" level=info msg="CreateContainer within sandbox \"9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"80adcbdbb912afeb24fa2de24d5e94deec65e86b18fe819417a10a234243333a\"" Jul 7 02:48:59.754277 containerd[1598]: time="2025-07-07T02:48:59.753397401Z" level=info msg="StartContainer for \"80adcbdbb912afeb24fa2de24d5e94deec65e86b18fe819417a10a234243333a\"" Jul 7 02:48:59.758461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163103604.mount: Deactivated successfully. 
Jul 7 02:48:59.824608 containerd[1598]: time="2025-07-07T02:48:59.823989327Z" level=info msg="StopPodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\"" Jul 7 02:48:59.824608 containerd[1598]: time="2025-07-07T02:48:59.824226824Z" level=info msg="StopPodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\"" Jul 7 02:48:59.858808 containerd[1598]: time="2025-07-07T02:48:59.858706898Z" level=info msg="StartContainer for \"80adcbdbb912afeb24fa2de24d5e94deec65e86b18fe819417a10a234243333a\" returns successfully" Jul 7 02:48:59.862113 containerd[1598]: time="2025-07-07T02:48:59.862071810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.930 [INFO][4238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.933 [INFO][4238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" iface="eth0" netns="/var/run/netns/cni-9838fd19-0a80-79ba-456a-207dc57b2335" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.934 [INFO][4238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" iface="eth0" netns="/var/run/netns/cni-9838fd19-0a80-79ba-456a-207dc57b2335" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.934 [INFO][4238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" iface="eth0" netns="/var/run/netns/cni-9838fd19-0a80-79ba-456a-207dc57b2335" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.934 [INFO][4238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.934 [INFO][4238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.988 [INFO][4265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.988 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.988 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.999 [WARNING][4265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:48:59.999 [INFO][4265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:49:00.002 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:00.017571 containerd[1598]: 2025-07-07 02:49:00.008 [INFO][4238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:00.017571 containerd[1598]: time="2025-07-07T02:49:00.014181879Z" level=info msg="TearDown network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" successfully" Jul 7 02:49:00.017571 containerd[1598]: time="2025-07-07T02:49:00.014216992Z" level=info msg="StopPodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" returns successfully" Jul 7 02:49:00.021493 containerd[1598]: time="2025-07-07T02:49:00.018711286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-grxdk,Uid:6e39c44d-9500-40c6-a8ed-19a5f1bb302d,Namespace:kube-system,Attempt:1,}" Jul 7 02:49:00.019532 systemd[1]: run-netns-cni\x2d9838fd19\x2d0a80\x2d79ba\x2d456a\x2d207dc57b2335.mount: Deactivated successfully. Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:48:59.956 [INFO][4239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:48:59.957 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" iface="eth0" netns="/var/run/netns/cni-6abf8e00-3163-0db0-d76f-e274b8076afd" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:48:59.957 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" iface="eth0" netns="/var/run/netns/cni-6abf8e00-3163-0db0-d76f-e274b8076afd" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:48:59.958 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" iface="eth0" netns="/var/run/netns/cni-6abf8e00-3163-0db0-d76f-e274b8076afd" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:48:59.958 [INFO][4239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:48:59.958 [INFO][4239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.001 [INFO][4270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.001 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.003 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.017 [WARNING][4270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.018 [INFO][4270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.024 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:00.031121 containerd[1598]: 2025-07-07 02:49:00.026 [INFO][4239] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:00.032162 containerd[1598]: time="2025-07-07T02:49:00.031852150Z" level=info msg="TearDown network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" successfully" Jul 7 02:49:00.032162 containerd[1598]: time="2025-07-07T02:49:00.031883445Z" level=info msg="StopPodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" returns successfully" Jul 7 02:49:00.033330 containerd[1598]: time="2025-07-07T02:49:00.032898268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nnz8r,Uid:b1996f07-5d79-4f58-bbb7-a4685ca36d35,Namespace:calico-system,Attempt:1,}" Jul 7 02:49:00.210526 systemd-networkd[1261]: cali7cb6eb94fe7: Link UP Jul 7 02:49:00.212332 systemd-networkd[1261]: cali7cb6eb94fe7: Gained carrier Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.076 [INFO][4281] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.096 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0 coredns-7c65d6cfc9- kube-system 6e39c44d-9500-40c6-a8ed-19a5f1bb302d 924 0 2025-07-07 02:48:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com coredns-7c65d6cfc9-grxdk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cb6eb94fe7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.096 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.140 [INFO][4305] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" HandleID="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.140 [INFO][4305] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" HandleID="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-grxdk", "timestamp":"2025-07-07 02:49:00.140338741 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.140 [INFO][4305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.140 [INFO][4305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.140 [INFO][4305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.149 [INFO][4305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.155 [INFO][4305] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.161 [INFO][4305] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.163 [INFO][4305] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.166 [INFO][4305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.167 [INFO][4305] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.173 [INFO][4305] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8 Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.187 [INFO][4305] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.195 [INFO][4305] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.130/26] block=192.168.97.128/26 handle="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.195 [INFO][4305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.130/26] handle="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.195 [INFO][4305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:00.238197 containerd[1598]: 2025-07-07 02:49:00.195 [INFO][4305] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.130/26] IPv6=[] ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" HandleID="k8s-pod-network.96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.239321 containerd[1598]: 2025-07-07 02:49:00.199 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6e39c44d-9500-40c6-a8ed-19a5f1bb302d", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-grxdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6eb94fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:00.239321 containerd[1598]: 2025-07-07 02:49:00.201 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.130/32] ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.239321 containerd[1598]: 2025-07-07 02:49:00.201 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cb6eb94fe7 ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.239321 containerd[1598]: 2025-07-07 02:49:00.212 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.239321 containerd[1598]: 2025-07-07 02:49:00.213 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6e39c44d-9500-40c6-a8ed-19a5f1bb302d", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8", Pod:"coredns-7c65d6cfc9-grxdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6eb94fe7", MAC:"ce:cb:fa:00:25:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:00.239321 containerd[1598]: 2025-07-07 02:49:00.234 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-grxdk" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:00.273238 containerd[1598]: time="2025-07-07T02:49:00.272959912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:00.273238 containerd[1598]: time="2025-07-07T02:49:00.273050968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:00.273488 containerd[1598]: time="2025-07-07T02:49:00.273155864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:00.274569 containerd[1598]: time="2025-07-07T02:49:00.274416440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:00.325908 systemd-networkd[1261]: cali7bf71826d2e: Link UP Jul 7 02:49:00.326194 systemd-networkd[1261]: cali7bf71826d2e: Gained carrier Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.088 [INFO][4286] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.100 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0 goldmane-58fd7646b9- calico-system b1996f07-5d79-4f58-bbb7-a4685ca36d35 925 0 2025-07-07 02:48:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com goldmane-58fd7646b9-nnz8r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7bf71826d2e [] [] }} ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.100 [INFO][4286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.141 [INFO][4307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" HandleID="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.141 [INFO][4307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" HandleID="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"goldmane-58fd7646b9-nnz8r", "timestamp":"2025-07-07 02:49:00.141020243 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.141 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.195 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.196 [INFO][4307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.251 [INFO][4307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.267 [INFO][4307] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.275 [INFO][4307] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.279 [INFO][4307] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.285 [INFO][4307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.286 [INFO][4307] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.289 [INFO][4307] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.296 [INFO][4307] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.307 [INFO][4307] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.131/26] block=192.168.97.128/26 handle="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.307 [INFO][4307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.131/26] handle="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.307 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:00.358005 containerd[1598]: 2025-07-07 02:49:00.307 [INFO][4307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.131/26] IPv6=[] ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" HandleID="k8s-pod-network.6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.360702 containerd[1598]: 2025-07-07 02:49:00.313 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b1996f07-5d79-4f58-bbb7-a4685ca36d35", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-58fd7646b9-nnz8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bf71826d2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:00.360702 containerd[1598]: 2025-07-07 02:49:00.314 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.131/32] ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.360702 containerd[1598]: 2025-07-07 02:49:00.314 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bf71826d2e ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.360702 containerd[1598]: 2025-07-07 02:49:00.327 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.360702 containerd[1598]: 2025-07-07 02:49:00.328 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b1996f07-5d79-4f58-bbb7-a4685ca36d35", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b", Pod:"goldmane-58fd7646b9-nnz8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bf71826d2e", MAC:"fe:bc:02:35:bc:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:00.360702 containerd[1598]: 2025-07-07 02:49:00.351 [INFO][4286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b" Namespace="calico-system" Pod="goldmane-58fd7646b9-nnz8r" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:00.381362 containerd[1598]: time="2025-07-07T02:49:00.381316618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-grxdk,Uid:6e39c44d-9500-40c6-a8ed-19a5f1bb302d,Namespace:kube-system,Attempt:1,} returns sandbox id \"96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8\"" Jul 7 02:49:00.389681 containerd[1598]: time="2025-07-07T02:49:00.389635180Z" level=info msg="CreateContainer within sandbox \"96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 02:49:00.398780 containerd[1598]: time="2025-07-07T02:49:00.398647514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:00.398780 containerd[1598]: time="2025-07-07T02:49:00.398711532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:00.398996 containerd[1598]: time="2025-07-07T02:49:00.398810513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:00.400237 containerd[1598]: time="2025-07-07T02:49:00.400068896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:00.414892 containerd[1598]: time="2025-07-07T02:49:00.414776391Z" level=info msg="CreateContainer within sandbox \"96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b07d530873e15ba10f2f30e67da733d5afffb05e496eaea4e9316d3ec43c088\"" Jul 7 02:49:00.415989 containerd[1598]: time="2025-07-07T02:49:00.415950931Z" level=info msg="StartContainer for \"0b07d530873e15ba10f2f30e67da733d5afffb05e496eaea4e9316d3ec43c088\"" Jul 7 02:49:00.525323 containerd[1598]: time="2025-07-07T02:49:00.524706040Z" level=info msg="StartContainer for \"0b07d530873e15ba10f2f30e67da733d5afffb05e496eaea4e9316d3ec43c088\" returns successfully" Jul 7 02:49:00.600382 kubelet[2809]: I0707 02:49:00.598003 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:49:00.618492 containerd[1598]: time="2025-07-07T02:49:00.617820537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nnz8r,Uid:b1996f07-5d79-4f58-bbb7-a4685ca36d35,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b\"" Jul 7 02:49:00.774050 kubelet[2809]: I0707 02:49:00.773521 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:49:00.789877 systemd[1]: run-netns-cni\x2d6abf8e00\x2d3163\x2d0db0\x2dd76f\x2de274b8076afd.mount: Deactivated successfully. Jul 7 02:49:01.160278 kubelet[2809]: I0707 02:49:01.157953 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-grxdk" podStartSLOduration=37.157923219 podStartE2EDuration="37.157923219s" podCreationTimestamp="2025-07-07 02:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:49:01.15673758 +0000 UTC m=+43.500082351" watchObservedRunningTime="2025-07-07 02:49:01.157923219 +0000 UTC m=+43.501267991" Jul 7 02:49:01.642186 kernel: bpftool[4558]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 02:49:01.778633 systemd-networkd[1261]: cali7cb6eb94fe7: Gained IPv6LL Jul 7 02:49:01.825115 containerd[1598]: time="2025-07-07T02:49:01.825058232Z" level=info msg="StopPodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\"" Jul 7 02:49:01.834248 containerd[1598]: time="2025-07-07T02:49:01.826321651Z" level=info msg="StopPodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\"" Jul 7 02:49:02.035720 systemd-networkd[1261]: cali7bf71826d2e: Gained IPv6LL Jul 7 02:49:02.096381 systemd-networkd[1261]: vxlan.calico: Link UP Jul 7 02:49:02.096666 systemd-networkd[1261]: vxlan.calico: Gained carrier Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.022 [INFO][4582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.022 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" iface="eth0" netns="/var/run/netns/cni-e18c6639-90c0-23ef-01ee-bcafbbab928b" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.024 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" iface="eth0" netns="/var/run/netns/cni-e18c6639-90c0-23ef-01ee-bcafbbab928b" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.025 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" iface="eth0" netns="/var/run/netns/cni-e18c6639-90c0-23ef-01ee-bcafbbab928b" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.025 [INFO][4582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.025 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.177 [INFO][4615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.178 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.178 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.197 [WARNING][4615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.197 [INFO][4615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.202 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:02.252716 containerd[1598]: 2025-07-07 02:49:02.245 [INFO][4582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:02.259185 containerd[1598]: time="2025-07-07T02:49:02.255592202Z" level=info msg="TearDown network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" successfully" Jul 7 02:49:02.259185 containerd[1598]: time="2025-07-07T02:49:02.255630113Z" level=info msg="StopPodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" returns successfully" Jul 7 02:49:02.260161 containerd[1598]: time="2025-07-07T02:49:02.259589549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8488869-mvrcm,Uid:bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd,Namespace:calico-system,Attempt:1,}" Jul 7 02:49:02.263379 systemd[1]: run-netns-cni\x2de18c6639\x2d90c0\x2d23ef\x2d01ee\x2dbcafbbab928b.mount: Deactivated successfully. 
Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:01.982 [INFO][4572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:01.982 [INFO][4572] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" iface="eth0" netns="/var/run/netns/cni-e913a829-0341-40ad-97fd-679063cb6f49" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:01.982 [INFO][4572] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" iface="eth0" netns="/var/run/netns/cni-e913a829-0341-40ad-97fd-679063cb6f49" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:01.985 [INFO][4572] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" iface="eth0" netns="/var/run/netns/cni-e913a829-0341-40ad-97fd-679063cb6f49" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:01.985 [INFO][4572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:01.985 [INFO][4572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.280 [INFO][4609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.280 [INFO][4609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.280 [INFO][4609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.291 [WARNING][4609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.295 [INFO][4609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.304 [INFO][4609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:02.347321 containerd[1598]: 2025-07-07 02:49:02.324 [INFO][4572] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:02.354158 containerd[1598]: time="2025-07-07T02:49:02.347474316Z" level=info msg="TearDown network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" successfully" Jul 7 02:49:02.354158 containerd[1598]: time="2025-07-07T02:49:02.347982877Z" level=info msg="StopPodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" returns successfully" Jul 7 02:49:02.354158 containerd[1598]: time="2025-07-07T02:49:02.353912010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-b5dg9,Uid:d34f9b09-d10e-4fb0-9e83-4c578ee4bf07,Namespace:calico-apiserver,Attempt:1,}" Jul 7 02:49:02.360461 systemd[1]: run-netns-cni\x2de913a829\x2d0341\x2d40ad\x2d97fd\x2d679063cb6f49.mount: Deactivated successfully. Jul 7 02:49:02.557909 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:02.547431 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:02.547467 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:02.739629 systemd-networkd[1261]: cali192c8a8ac1b: Link UP Jul 7 02:49:02.756723 systemd-networkd[1261]: cali192c8a8ac1b: Gained carrier Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.474 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0 calico-kube-controllers-84b8488869- calico-system bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd 959 0 2025-07-07 02:48:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84b8488869 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com calico-kube-controllers-84b8488869-mvrcm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali192c8a8ac1b [] [] }} ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.474 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.598 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" HandleID="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.598 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" HandleID="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"calico-kube-controllers-84b8488869-mvrcm", "timestamp":"2025-07-07 02:49:02.59805032 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.598 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.599 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.599 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.639 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.661 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.676 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.679 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.684 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.684 [INFO][4680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.687 [INFO][4680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5 Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.697 [INFO][4680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.713 [INFO][4680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.132/26] block=192.168.97.128/26 handle="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.713 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.132/26] handle="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.713 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:02.800874 containerd[1598]: 2025-07-07 02:49:02.714 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.132/26] IPv6=[] ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" HandleID="k8s-pod-network.297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.803336 containerd[1598]: 2025-07-07 02:49:02.723 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0", GenerateName:"calico-kube-controllers-84b8488869-", Namespace:"calico-system", SelfLink:"", UID:"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b8488869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-84b8488869-mvrcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali192c8a8ac1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:02.803336 containerd[1598]: 2025-07-07 02:49:02.725 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.132/32] ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.803336 containerd[1598]: 2025-07-07 02:49:02.725 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali192c8a8ac1b ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.803336 containerd[1598]: 2025-07-07 02:49:02.765 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" 
WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.803336 containerd[1598]: 2025-07-07 02:49:02.767 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0", GenerateName:"calico-kube-controllers-84b8488869-", Namespace:"calico-system", SelfLink:"", UID:"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b8488869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5", Pod:"calico-kube-controllers-84b8488869-mvrcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali192c8a8ac1b", MAC:"a6:2b:44:b4:aa:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:02.803336 containerd[1598]: 2025-07-07 02:49:02.795 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5" Namespace="calico-system" Pod="calico-kube-controllers-84b8488869-mvrcm" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:02.819482 containerd[1598]: time="2025-07-07T02:49:02.819248736Z" level=info msg="StopPodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\"" Jul 7 02:49:02.830842 containerd[1598]: time="2025-07-07T02:49:02.830682351Z" level=info msg="StopPodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\"" Jul 7 02:49:02.831974 containerd[1598]: time="2025-07-07T02:49:02.831953740Z" level=info msg="StopPodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\"" Jul 7 02:49:02.889201 systemd-networkd[1261]: calia07f16fb571: Link UP Jul 7 02:49:02.891441 systemd-networkd[1261]: calia07f16fb571: Gained carrier Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.501 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0 calico-apiserver-549c48f8dd- calico-apiserver 
d34f9b09-d10e-4fb0-9e83-4c578ee4bf07 958 0 2025-07-07 02:48:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:549c48f8dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com calico-apiserver-549c48f8dd-b5dg9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia07f16fb571 [] [] }} ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.501 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.639 [INFO][4685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" HandleID="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.644 [INFO][4685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" HandleID="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5cf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"calico-apiserver-549c48f8dd-b5dg9", "timestamp":"2025-07-07 02:49:02.639695449 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.644 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.714 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.717 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.763 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.785 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.804 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.813 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.829 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.829 [INFO][4685] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.842 [INFO][4685] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.852 [INFO][4685] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.872 [INFO][4685] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.133/26] block=192.168.97.128/26 handle="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.872 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.133/26] handle="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.872 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:02.934253 containerd[1598]: 2025-07-07 02:49:02.872 [INFO][4685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.133/26] IPv6=[] ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" HandleID="k8s-pod-network.af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.935742 containerd[1598]: 2025-07-07 02:49:02.882 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-549c48f8dd-b5dg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia07f16fb571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:02.935742 containerd[1598]: 2025-07-07 02:49:02.882 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.133/32] ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.935742 containerd[1598]: 2025-07-07 02:49:02.882 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia07f16fb571 ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.935742 containerd[1598]: 2025-07-07 02:49:02.894 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:02.935742 containerd[1598]: 2025-07-07 02:49:02.898 
[INFO][4664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f", Pod:"calico-apiserver-549c48f8dd-b5dg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia07f16fb571", MAC:"0a:6e:37:46:bf:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:02.935742 containerd[1598]: 2025-07-07 02:49:02.924 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-b5dg9" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:03.027557 containerd[1598]: time="2025-07-07T02:49:03.016722667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:03.027557 containerd[1598]: time="2025-07-07T02:49:03.016792103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:03.027557 containerd[1598]: time="2025-07-07T02:49:03.016830236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.027557 containerd[1598]: time="2025-07-07T02:49:03.017936762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.110034 containerd[1598]: time="2025-07-07T02:49:03.108542385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:03.110034 containerd[1598]: time="2025-07-07T02:49:03.108648021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:03.110034 containerd[1598]: time="2025-07-07T02:49:03.108829494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.111517 containerd[1598]: time="2025-07-07T02:49:03.110472238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.046 [INFO][4767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.046 [INFO][4767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" iface="eth0" netns="/var/run/netns/cni-26bbc194-b156-6097-6c11-deb2fb8522e1" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.047 [INFO][4767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" iface="eth0" netns="/var/run/netns/cni-26bbc194-b156-6097-6c11-deb2fb8522e1" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.056 [INFO][4767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" iface="eth0" netns="/var/run/netns/cni-26bbc194-b156-6097-6c11-deb2fb8522e1" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.056 [INFO][4767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.056 [INFO][4767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.245 [INFO][4827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.245 [INFO][4827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.245 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.255 [WARNING][4827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.255 [INFO][4827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.261 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:03.275168 containerd[1598]: 2025-07-07 02:49:03.269 [INFO][4767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:03.282653 containerd[1598]: time="2025-07-07T02:49:03.280538718Z" level=info msg="TearDown network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" successfully" Jul 7 02:49:03.282653 containerd[1598]: time="2025-07-07T02:49:03.280616171Z" level=info msg="StopPodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" returns successfully" Jul 7 02:49:03.287470 systemd[1]: run-netns-cni\x2d26bbc194\x2db156\x2d6097\x2d6c11\x2ddeb2fb8522e1.mount: Deactivated successfully. Jul 7 02:49:03.289156 containerd[1598]: time="2025-07-07T02:49:03.289071496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwz77,Uid:6d6134fd-985e-434b-964a-df65b698ac32,Namespace:calico-system,Attempt:1,}" Jul 7 02:49:03.303668 containerd[1598]: time="2025-07-07T02:49:03.303622082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b8488869-mvrcm,Uid:bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5\"" Jul 7 02:49:03.309562 containerd[1598]: time="2025-07-07T02:49:03.309110138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-b5dg9,Uid:d34f9b09-d10e-4fb0-9e83-4c578ee4bf07,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f\"" Jul 7 02:49:03.314909 systemd-networkd[1261]: vxlan.calico: Gained IPv6LL Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.086 [INFO][4770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.086 [INFO][4770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" iface="eth0" netns="/var/run/netns/cni-4e7bfba9-7471-167b-ee15-2dc755e9c0d0" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.087 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" iface="eth0" netns="/var/run/netns/cni-4e7bfba9-7471-167b-ee15-2dc755e9c0d0" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.090 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" iface="eth0" netns="/var/run/netns/cni-4e7bfba9-7471-167b-ee15-2dc755e9c0d0" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.090 [INFO][4770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.090 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.281 [INFO][4841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.293 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.293 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.312 [WARNING][4841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.312 [INFO][4841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.320 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:03.340966 containerd[1598]: 2025-07-07 02:49:03.334 [INFO][4770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:03.342103 containerd[1598]: time="2025-07-07T02:49:03.342060514Z" level=info msg="TearDown network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" successfully" Jul 7 02:49:03.342163 containerd[1598]: time="2025-07-07T02:49:03.342106226Z" level=info msg="StopPodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" returns successfully" Jul 7 02:49:03.344646 containerd[1598]: time="2025-07-07T02:49:03.344608109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-dvzbn,Uid:71e788e3-7905-4ea7-85e7-4768ecd449f5,Namespace:calico-apiserver,Attempt:1,}" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.094 [INFO][4775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.095 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" iface="eth0" netns="/var/run/netns/cni-5802b4ae-0d51-2a1d-3e40-38f7185c418d" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.096 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" iface="eth0" netns="/var/run/netns/cni-5802b4ae-0d51-2a1d-3e40-38f7185c418d" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.098 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" iface="eth0" netns="/var/run/netns/cni-5802b4ae-0d51-2a1d-3e40-38f7185c418d" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.098 [INFO][4775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.098 [INFO][4775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.296 [INFO][4843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.297 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.320 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.337 [WARNING][4843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.337 [INFO][4843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.341 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:03.350886 containerd[1598]: 2025-07-07 02:49:03.346 [INFO][4775] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:03.352075 containerd[1598]: time="2025-07-07T02:49:03.351016402Z" level=info msg="TearDown network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" successfully" Jul 7 02:49:03.352075 containerd[1598]: time="2025-07-07T02:49:03.351039904Z" level=info msg="StopPodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" returns successfully" Jul 7 02:49:03.353694 containerd[1598]: time="2025-07-07T02:49:03.353216388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ddnt5,Uid:8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd,Namespace:kube-system,Attempt:1,}" Jul 7 02:49:03.625821 systemd-networkd[1261]: cali7e857705598: Link UP Jul 7 02:49:03.628230 systemd-networkd[1261]: cali7e857705598: Gained carrier Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.423 [INFO][4903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0 csi-node-driver- calico-system 6d6134fd-985e-434b-964a-df65b698ac32 972 0 2025-07-07 02:48:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com csi-node-driver-fwz77 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7e857705598 [] [] }} ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.424 [INFO][4903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.534 [INFO][4935] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" HandleID="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.535 [INFO][4935] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" HandleID="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf970), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"csi-node-driver-fwz77", "timestamp":"2025-07-07 02:49:03.532907796 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 
02:49:03.535 [INFO][4935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.535 [INFO][4935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.535 [INFO][4935] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.549 [INFO][4935] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.558 [INFO][4935] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.566 [INFO][4935] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.571 [INFO][4935] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.577 [INFO][4935] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.577 [INFO][4935] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.581 [INFO][4935] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071 Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.592 [INFO][4935] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.606 [INFO][4935] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.134/26] block=192.168.97.128/26 handle="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.606 [INFO][4935] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.134/26] handle="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.607 [INFO][4935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:03.653051 containerd[1598]: 2025-07-07 02:49:03.607 [INFO][4935] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.134/26] IPv6=[] ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" HandleID="k8s-pod-network.bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.655249 containerd[1598]: 2025-07-07 02:49:03.613 [INFO][4903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d6134fd-985e-434b-964a-df65b698ac32", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-fwz77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e857705598", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:03.655249 containerd[1598]: 2025-07-07 02:49:03.613 [INFO][4903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.134/32] ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.655249 containerd[1598]: 2025-07-07 02:49:03.613 [INFO][4903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e857705598 ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.655249 containerd[1598]: 2025-07-07 02:49:03.629 [INFO][4903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.655249 containerd[1598]: 2025-07-07 02:49:03.630 [INFO][4903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d6134fd-985e-434b-964a-df65b698ac32", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071", Pod:"csi-node-driver-fwz77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e857705598", MAC:"fa:bc:e9:eb:e9:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:03.655249 containerd[1598]: 2025-07-07 02:49:03.646 [INFO][4903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071" Namespace="calico-system" Pod="csi-node-driver-fwz77" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:03.727901 containerd[1598]: time="2025-07-07T02:49:03.727270459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:03.728102 containerd[1598]: time="2025-07-07T02:49:03.727861165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:03.728102 containerd[1598]: time="2025-07-07T02:49:03.727875256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.728343 containerd[1598]: time="2025-07-07T02:49:03.728066832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.736471 systemd-networkd[1261]: cali8b5cd036b4c: Link UP Jul 7 02:49:03.753354 systemd-networkd[1261]: cali8b5cd036b4c: Gained carrier Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.471 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0 calico-apiserver-549c48f8dd- calico-apiserver 71e788e3-7905-4ea7-85e7-4768ecd449f5 973 0 2025-07-07 02:48:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:549c48f8dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com calico-apiserver-549c48f8dd-dvzbn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b5cd036b4c [] [] }} ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.471 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.555 [INFO][4942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" HandleID="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.556 [INFO][4942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" HandleID="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"calico-apiserver-549c48f8dd-dvzbn", "timestamp":"2025-07-07 02:49:03.555325161 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.556 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.607 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.607 [INFO][4942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.650 [INFO][4942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.661 [INFO][4942] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.669 [INFO][4942] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.673 [INFO][4942] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.681 [INFO][4942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.681 [INFO][4942] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.687 [INFO][4942] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2 Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.698 [INFO][4942] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.711 [INFO][4942] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.135/26] block=192.168.97.128/26 handle="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.711 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.135/26] handle="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.711 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:03.800408 containerd[1598]: 2025-07-07 02:49:03.711 [INFO][4942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.135/26] IPv6=[] ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" HandleID="k8s-pod-network.982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.801692 containerd[1598]: 2025-07-07 02:49:03.725 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71e788e3-7905-4ea7-85e7-4768ecd449f5", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-549c48f8dd-dvzbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b5cd036b4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:03.801692 containerd[1598]: 2025-07-07 02:49:03.726 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.135/32] ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.801692 containerd[1598]: 2025-07-07 02:49:03.726 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b5cd036b4c ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.801692 containerd[1598]: 2025-07-07 02:49:03.755 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.801692 containerd[1598]: 2025-07-07 02:49:03.760 
[INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71e788e3-7905-4ea7-85e7-4768ecd449f5", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2", Pod:"calico-apiserver-549c48f8dd-dvzbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b5cd036b4c", MAC:"be:f9:f3:79:d2:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:03.801692 containerd[1598]: 2025-07-07 02:49:03.790 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2" Namespace="calico-apiserver" Pod="calico-apiserver-549c48f8dd-dvzbn" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:03.875265 containerd[1598]: time="2025-07-07T02:49:03.874750329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwz77,Uid:6d6134fd-985e-434b-964a-df65b698ac32,Namespace:calico-system,Attempt:1,} returns sandbox id \"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071\"" Jul 7 02:49:03.877749 systemd-networkd[1261]: calice808665d05: Link UP Jul 7 02:49:03.882350 systemd-networkd[1261]: calice808665d05: Gained carrier Jul 7 02:49:03.884879 containerd[1598]: time="2025-07-07T02:49:03.881807014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:03.884879 containerd[1598]: time="2025-07-07T02:49:03.883613789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:03.884879 containerd[1598]: time="2025-07-07T02:49:03.883662232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.884879 containerd[1598]: time="2025-07-07T02:49:03.883815580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.525 [INFO][4921] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0 coredns-7c65d6cfc9- kube-system 8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd 974 0 2025-07-07 02:48:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ijdf9.gb1.brightbox.com coredns-7c65d6cfc9-ddnt5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice808665d05 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.526 [INFO][4921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.604 [INFO][4950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" HandleID="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.604 [INFO][4950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" HandleID="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ijdf9.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-ddnt5", "timestamp":"2025-07-07 02:49:03.604663559 +0000 UTC"}, Hostname:"srv-ijdf9.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.605 [INFO][4950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.711 [INFO][4950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.712 [INFO][4950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ijdf9.gb1.brightbox.com' Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.753 [INFO][4950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.764 [INFO][4950] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.782 [INFO][4950] ipam/ipam.go 511: Trying affinity for 192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.786 [INFO][4950] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.794 [INFO][4950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.794 [INFO][4950] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.797 [INFO][4950] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.805 [INFO][4950] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.828 [INFO][4950] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.97.136/26] block=192.168.97.128/26 handle="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.830 [INFO][4950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.136/26] handle="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" host="srv-ijdf9.gb1.brightbox.com" Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.830 [INFO][4950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:49:03.931932 containerd[1598]: 2025-07-07 02:49:03.833 [INFO][4950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.136/26] IPv6=[] ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" HandleID="k8s-pod-network.bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.932748 containerd[1598]: 2025-07-07 02:49:03.852 [INFO][4921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-ddnt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice808665d05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:03.932748 containerd[1598]: 2025-07-07 02:49:03.852 [INFO][4921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.136/32] ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.932748 containerd[1598]: 2025-07-07 02:49:03.852 [INFO][4921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice808665d05 ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.932748 containerd[1598]: 2025-07-07 02:49:03.889 [INFO][4921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:03.932748 containerd[1598]: 2025-07-07 02:49:03.891 [INFO][4921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b", Pod:"coredns-7c65d6cfc9-ddnt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice808665d05", MAC:"0e:2c:7b:2e:ca:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:03.932748 containerd[1598]: 2025-07-07 02:49:03.912 [INFO][4921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ddnt5" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:04.001446 containerd[1598]: time="2025-07-07T02:49:04.001025051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:49:04.002366 containerd[1598]: time="2025-07-07T02:49:04.002313302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:49:04.002725 containerd[1598]: time="2025-07-07T02:49:04.002689103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:04.002948 containerd[1598]: time="2025-07-07T02:49:04.002920947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:49:04.034923 systemd[1]: run-netns-cni\x2d4e7bfba9\x2d7471\x2d167b\x2dee15\x2d2dc755e9c0d0.mount: Deactivated successfully. Jul 7 02:49:04.035113 systemd[1]: run-netns-cni\x2d5802b4ae\x2d0d51\x2d2a1d\x2d3e40\x2d38f7185c418d.mount: Deactivated successfully. Jul 7 02:49:04.070311 containerd[1598]: time="2025-07-07T02:49:04.069369144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549c48f8dd-dvzbn,Uid:71e788e3-7905-4ea7-85e7-4768ecd449f5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2\"" Jul 7 02:49:04.135267 containerd[1598]: time="2025-07-07T02:49:04.134389160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ddnt5,Uid:8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b\"" Jul 7 02:49:04.146358 containerd[1598]: time="2025-07-07T02:49:04.145756022Z" level=info msg="CreateContainer within sandbox \"bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 02:49:04.167432 containerd[1598]: time="2025-07-07T02:49:04.167390342Z" level=info msg="CreateContainer within sandbox \"bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee33f3e7dab1d1446b5c9ebf32540d98607838187ec8b0c7a0ca62c0f27a9ef4\"" Jul 7 02:49:04.169448 containerd[1598]: time="2025-07-07T02:49:04.168865283Z" level=info msg="StartContainer for \"ee33f3e7dab1d1446b5c9ebf32540d98607838187ec8b0c7a0ca62c0f27a9ef4\"" Jul 7 02:49:04.272677 containerd[1598]: time="2025-07-07T02:49:04.272640049Z" level=info msg="StartContainer for \"ee33f3e7dab1d1446b5c9ebf32540d98607838187ec8b0c7a0ca62c0f27a9ef4\" returns successfully" Jul 7 02:49:04.391000 containerd[1598]: time="2025-07-07T02:49:04.390845163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:04.391652 containerd[1598]: time="2025-07-07T02:49:04.391611799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 02:49:04.392622 containerd[1598]: time="2025-07-07T02:49:04.392600667Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:04.397651 containerd[1598]: time="2025-07-07T02:49:04.397622416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:04.398623 containerd[1598]: time="2025-07-07T02:49:04.398597343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.536476704s" Jul 7 02:49:04.398719 containerd[1598]: time="2025-07-07T02:49:04.398705392Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 02:49:04.400421 containerd[1598]: time="2025-07-07T02:49:04.400388567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 02:49:04.402494 containerd[1598]: time="2025-07-07T02:49:04.402414826Z" level=info msg="CreateContainer within sandbox \"9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 02:49:04.426654 containerd[1598]: time="2025-07-07T02:49:04.426514823Z" level=info msg="CreateContainer within sandbox \"9fc9bfe3755761cd08f19ed2a8f80c4079a96a261a6194ed9b162be6758e7479\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1b7678a00b29e3a83602194a456738fe8237b526471ab8c8543558bf763ee15e\"" Jul 7 02:49:04.430824 containerd[1598]: time="2025-07-07T02:49:04.429300266Z" level=info msg="StartContainer for \"1b7678a00b29e3a83602194a456738fe8237b526471ab8c8543558bf763ee15e\"" Jul 7 02:49:04.466638 systemd-networkd[1261]: cali192c8a8ac1b: Gained IPv6LL Jul 7 02:49:04.526053 containerd[1598]: time="2025-07-07T02:49:04.525873484Z" level=info msg="StartContainer for \"1b7678a00b29e3a83602194a456738fe8237b526471ab8c8543558bf763ee15e\" returns successfully" Jul 7 02:49:04.914665 systemd-networkd[1261]: calia07f16fb571: Gained IPv6LL Jul 7 02:49:05.034715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855819627.mount: Deactivated successfully. Jul 7 02:49:05.042613 systemd-networkd[1261]: cali7e857705598: Gained IPv6LL Jul 7 02:49:05.243952 kubelet[2809]: I0707 02:49:05.243291 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ddnt5" podStartSLOduration=41.243258952 podStartE2EDuration="41.243258952s" podCreationTimestamp="2025-07-07 02:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:49:05.241836421 +0000 UTC m=+47.585181192" watchObservedRunningTime="2025-07-07 02:49:05.243258952 +0000 UTC m=+47.586603724" Jul 7 02:49:05.618783 systemd-networkd[1261]: cali8b5cd036b4c: Gained IPv6LL Jul 7 02:49:05.682554 systemd-networkd[1261]: calice808665d05: Gained IPv6LL Jul 7 02:49:07.386401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881843236.mount: Deactivated successfully. 
Jul 7 02:49:08.069621 containerd[1598]: time="2025-07-07T02:49:08.069568240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:08.071745 containerd[1598]: time="2025-07-07T02:49:08.070967371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 02:49:08.074121 containerd[1598]: time="2025-07-07T02:49:08.074004999Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:08.077290 containerd[1598]: time="2025-07-07T02:49:08.077244296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:08.078552 containerd[1598]: time="2025-07-07T02:49:08.078115824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.677572535s" Jul 7 02:49:08.078552 containerd[1598]: time="2025-07-07T02:49:08.078171731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 02:49:08.085799 containerd[1598]: time="2025-07-07T02:49:08.085677865Z" level=info msg="CreateContainer within sandbox \"6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 02:49:08.087399 containerd[1598]: time="2025-07-07T02:49:08.087337196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 02:49:08.105075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723582976.mount: Deactivated successfully. Jul 7 02:49:08.121359 containerd[1598]: time="2025-07-07T02:49:08.121285885Z" level=info msg="CreateContainer within sandbox \"6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a91a0ca36e824ebd5efd5cf3e6b352a16a2554597d18158b0117080c89a560f7\"" Jul 7 02:49:08.123246 containerd[1598]: time="2025-07-07T02:49:08.122876455Z" level=info msg="StartContainer for \"a91a0ca36e824ebd5efd5cf3e6b352a16a2554597d18158b0117080c89a560f7\"" Jul 7 02:49:08.285718 systemd[1]: run-containerd-runc-k8s.io-a91a0ca36e824ebd5efd5cf3e6b352a16a2554597d18158b0117080c89a560f7-runc.VoQRL1.mount: Deactivated successfully. Jul 7 02:49:08.372769 containerd[1598]: time="2025-07-07T02:49:08.372652223Z" level=info msg="StartContainer for \"a91a0ca36e824ebd5efd5cf3e6b352a16a2554597d18158b0117080c89a560f7\" returns successfully" Jul 7 02:49:08.562340 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:08.562406 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:08.568247 systemd-journald[1172]: Under memory pressure, flushing caches. 
Jul 7 02:49:09.277428 kubelet[2809]: I0707 02:49:09.274354 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-677fcf4f54-2485h" podStartSLOduration=5.873700308 podStartE2EDuration="12.274327577s" podCreationTimestamp="2025-07-07 02:48:57 +0000 UTC" firstStartedPulling="2025-07-07 02:48:57.999102654 +0000 UTC m=+40.342447400" lastFinishedPulling="2025-07-07 02:49:04.399729919 +0000 UTC m=+46.743074669" observedRunningTime="2025-07-07 02:49:05.289191918 +0000 UTC m=+47.632536689" watchObservedRunningTime="2025-07-07 02:49:09.274327577 +0000 UTC m=+51.617672344" Jul 7 02:49:09.277428 kubelet[2809]: I0707 02:49:09.274560 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-nnz8r" podStartSLOduration=24.818959959 podStartE2EDuration="32.274554439s" podCreationTimestamp="2025-07-07 02:48:37 +0000 UTC" firstStartedPulling="2025-07-07 02:49:00.623641145 +0000 UTC m=+42.966985892" lastFinishedPulling="2025-07-07 02:49:08.079235626 +0000 UTC m=+50.422580372" observedRunningTime="2025-07-07 02:49:09.273438094 +0000 UTC m=+51.616782865" watchObservedRunningTime="2025-07-07 02:49:09.274554439 +0000 UTC m=+51.617899207" Jul 7 02:49:11.387876 systemd[1]: run-containerd-runc-k8s.io-a91a0ca36e824ebd5efd5cf3e6b352a16a2554597d18158b0117080c89a560f7-runc.9ZraSR.mount: Deactivated successfully. Jul 7 02:49:11.625929 containerd[1598]: time="2025-07-07T02:49:11.625726579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:11.627528 containerd[1598]: time="2025-07-07T02:49:11.627477980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 02:49:11.629235 containerd[1598]: time="2025-07-07T02:49:11.628195802Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:11.640117 containerd[1598]: time="2025-07-07T02:49:11.639987941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:11.642350 containerd[1598]: time="2025-07-07T02:49:11.642309991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.540748343s" Jul 7 02:49:11.642498 containerd[1598]: time="2025-07-07T02:49:11.642482185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 02:49:11.655614 containerd[1598]: time="2025-07-07T02:49:11.655078664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 02:49:11.746872 containerd[1598]: time="2025-07-07T02:49:11.746583341Z" level=info msg="CreateContainer within sandbox \"297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 02:49:11.756047 
containerd[1598]: time="2025-07-07T02:49:11.755910538Z" level=info msg="CreateContainer within sandbox \"297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d28546f386b324fb8fdfadbd113bdbe530c02a2822a3c7f16909c196f133d6d1\"" Jul 7 02:49:11.757580 containerd[1598]: time="2025-07-07T02:49:11.757547129Z" level=info msg="StartContainer for \"d28546f386b324fb8fdfadbd113bdbe530c02a2822a3c7f16909c196f133d6d1\"" Jul 7 02:49:11.849738 containerd[1598]: time="2025-07-07T02:49:11.849694344Z" level=info msg="StartContainer for \"d28546f386b324fb8fdfadbd113bdbe530c02a2822a3c7f16909c196f133d6d1\" returns successfully" Jul 7 02:49:12.383877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222700452.mount: Deactivated successfully. Jul 7 02:49:12.467764 kubelet[2809]: I0707 02:49:12.467703 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84b8488869-mvrcm" podStartSLOduration=26.118913017 podStartE2EDuration="34.467674345s" podCreationTimestamp="2025-07-07 02:48:38 +0000 UTC" firstStartedPulling="2025-07-07 02:49:03.305994124 +0000 UTC m=+45.649338875" lastFinishedPulling="2025-07-07 02:49:11.654755456 +0000 UTC m=+53.998100203" observedRunningTime="2025-07-07 02:49:12.457757958 +0000 UTC m=+54.801102729" watchObservedRunningTime="2025-07-07 02:49:12.467674345 +0000 UTC m=+54.811019115" Jul 7 02:49:12.530576 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:12.537775 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:12.530625 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:14.587106 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:14.585228 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:14.585264 systemd-resolved[1505]: Flushed all caches. 
Jul 7 02:49:14.821182 containerd[1598]: time="2025-07-07T02:49:14.820798872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:14.822279 containerd[1598]: time="2025-07-07T02:49:14.822244973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 02:49:14.822871 containerd[1598]: time="2025-07-07T02:49:14.822827634Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:14.826776 containerd[1598]: time="2025-07-07T02:49:14.826494928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:14.827248 containerd[1598]: time="2025-07-07T02:49:14.827219583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.171666259s" Jul 7 02:49:14.827341 containerd[1598]: time="2025-07-07T02:49:14.827326852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 02:49:14.829538 containerd[1598]: time="2025-07-07T02:49:14.829317428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 02:49:14.836685 containerd[1598]: time="2025-07-07T02:49:14.836650200Z" level=info msg="CreateContainer within sandbox \"af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:49:14.846832 containerd[1598]: time="2025-07-07T02:49:14.846672847Z" level=info msg="CreateContainer within sandbox \"af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f183c76510e2aaf6bc1cdaefa9756c73866d94402c7e509784bda5261184e6ad\"" Jul 7 02:49:14.849203 containerd[1598]: time="2025-07-07T02:49:14.848348659Z" level=info msg="StartContainer for \"f183c76510e2aaf6bc1cdaefa9756c73866d94402c7e509784bda5261184e6ad\"" Jul 7 02:49:14.855753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286029843.mount: Deactivated successfully. 
Jul 7 02:49:15.021193 containerd[1598]: time="2025-07-07T02:49:15.021131341Z" level=info msg="StartContainer for \"f183c76510e2aaf6bc1cdaefa9756c73866d94402c7e509784bda5261184e6ad\" returns successfully" Jul 7 02:49:16.486768 kubelet[2809]: I0707 02:49:16.486654 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:49:16.651073 containerd[1598]: time="2025-07-07T02:49:16.651015660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:16.652177 containerd[1598]: time="2025-07-07T02:49:16.652017667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 02:49:16.653672 containerd[1598]: time="2025-07-07T02:49:16.653624902Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:16.656710 containerd[1598]: time="2025-07-07T02:49:16.656215439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:16.656880 containerd[1598]: time="2025-07-07T02:49:16.656857345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.827511476s" Jul 7 02:49:16.656956 containerd[1598]: time="2025-07-07T02:49:16.656943316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 02:49:16.658597 containerd[1598]: time="2025-07-07T02:49:16.658575600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 02:49:16.674828 containerd[1598]: time="2025-07-07T02:49:16.674790867Z" level=info msg="CreateContainer within sandbox \"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 02:49:16.729442 containerd[1598]: time="2025-07-07T02:49:16.728975485Z" level=info msg="CreateContainer within sandbox \"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"98225164ac215fa2e6d37dfdca62d7fd3f0407f5ec0a26757592f3dec0aebce6\"" Jul 7 02:49:16.729078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213767460.mount: Deactivated successfully. Jul 7 02:49:16.731880 containerd[1598]: time="2025-07-07T02:49:16.731158897Z" level=info msg="StartContainer for \"98225164ac215fa2e6d37dfdca62d7fd3f0407f5ec0a26757592f3dec0aebce6\"" Jul 7 02:49:16.800867 systemd[1]: run-containerd-runc-k8s.io-98225164ac215fa2e6d37dfdca62d7fd3f0407f5ec0a26757592f3dec0aebce6-runc.kRdldY.mount: Deactivated successfully. 
Jul 7 02:49:16.856061 containerd[1598]: time="2025-07-07T02:49:16.855950229Z" level=info msg="StartContainer for \"98225164ac215fa2e6d37dfdca62d7fd3f0407f5ec0a26757592f3dec0aebce6\" returns successfully" Jul 7 02:49:17.029746 containerd[1598]: time="2025-07-07T02:49:17.029431356Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:17.031013 containerd[1598]: time="2025-07-07T02:49:17.030912923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 02:49:17.032590 containerd[1598]: time="2025-07-07T02:49:17.032545798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 373.828404ms" Jul 7 02:49:17.032590 containerd[1598]: time="2025-07-07T02:49:17.032590190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 02:49:17.035049 containerd[1598]: time="2025-07-07T02:49:17.033924388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 02:49:17.036300 containerd[1598]: time="2025-07-07T02:49:17.035668600Z" level=info msg="CreateContainer within sandbox \"982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:49:17.048381 containerd[1598]: time="2025-07-07T02:49:17.048029084Z" level=info msg="CreateContainer within sandbox \"982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16c34f76c5ebeea92162209f6786135b79000a5f1232c85c2275179e7d459b18\"" Jul 7 02:49:17.050609 containerd[1598]: time="2025-07-07T02:49:17.050475794Z" level=info msg="StartContainer for \"16c34f76c5ebeea92162209f6786135b79000a5f1232c85c2275179e7d459b18\"" Jul 7 02:49:17.152453 containerd[1598]: time="2025-07-07T02:49:17.152270863Z" level=info msg="StartContainer for \"16c34f76c5ebeea92162209f6786135b79000a5f1232c85c2275179e7d459b18\" returns successfully" Jul 7 02:49:17.547037 kubelet[2809]: I0707 02:49:17.545875 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-549c48f8dd-b5dg9" podStartSLOduration=32.039855925 podStartE2EDuration="43.545846308s" podCreationTimestamp="2025-07-07 02:48:34 +0000 UTC" firstStartedPulling="2025-07-07 02:49:03.323055826 +0000 UTC m=+45.666400573" lastFinishedPulling="2025-07-07 02:49:14.82904621 +0000 UTC m=+57.172390956" observedRunningTime="2025-07-07 02:49:15.503494607 +0000 UTC m=+57.846839378" watchObservedRunningTime="2025-07-07 02:49:17.545846308 +0000 UTC m=+59.889191079" Jul 7 02:49:18.108829 containerd[1598]: time="2025-07-07T02:49:18.108362101Z" level=info msg="StopPodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\"" Jul 7 02:49:18.527298 kubelet[2809]: I0707 02:49:18.527160 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:49:18.559269 systemd-journald[1172]: Under memory pressure, flushing caches. 
Jul 7 02:49:18.547066 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:18.554159 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:18.667913 kubelet[2809]: I0707 02:49:18.652052 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-549c48f8dd-dvzbn" podStartSLOduration=31.694332573 podStartE2EDuration="44.651995302s" podCreationTimestamp="2025-07-07 02:48:34 +0000 UTC" firstStartedPulling="2025-07-07 02:49:04.075806347 +0000 UTC m=+46.419151094" lastFinishedPulling="2025-07-07 02:49:17.033469073 +0000 UTC m=+59.376813823" observedRunningTime="2025-07-07 02:49:17.536420756 +0000 UTC m=+59.879765505" watchObservedRunningTime="2025-07-07 02:49:18.651995302 +0000 UTC m=+60.995340049" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:18.842 [WARNING][5557] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71e788e3-7905-4ea7-85e7-4768ecd449f5", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2", Pod:"calico-apiserver-549c48f8dd-dvzbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b5cd036b4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:18.856 [INFO][5557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:18.856 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" iface="eth0" netns="" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:18.856 [INFO][5557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:18.856 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.155 [INFO][5586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.158 [INFO][5586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.159 [INFO][5586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.180 [WARNING][5586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.180 [INFO][5586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.182 [INFO][5586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:19.193750 containerd[1598]: 2025-07-07 02:49:19.187 [INFO][5557] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.196379 containerd[1598]: time="2025-07-07T02:49:19.193819657Z" level=info msg="TearDown network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" successfully" Jul 7 02:49:19.196379 containerd[1598]: time="2025-07-07T02:49:19.193857984Z" level=info msg="StopPodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" returns successfully" Jul 7 02:49:19.344833 containerd[1598]: time="2025-07-07T02:49:19.344665867Z" level=info msg="RemovePodSandbox for \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\"" Jul 7 02:49:19.347960 containerd[1598]: time="2025-07-07T02:49:19.347720296Z" level=info msg="Forcibly stopping sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\"" Jul 7 02:49:19.420189 containerd[1598]: time="2025-07-07T02:49:19.415822963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:19.429007 containerd[1598]: time="2025-07-07T02:49:19.428929030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 02:49:19.429448 containerd[1598]: time="2025-07-07T02:49:19.429428593Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:19.432545 containerd[1598]: time="2025-07-07T02:49:19.432508894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:49:19.434388 containerd[1598]: time="2025-07-07T02:49:19.434201032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.400193184s" Jul 7 02:49:19.434388 containerd[1598]: time="2025-07-07T02:49:19.434321586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.401 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"71e788e3-7905-4ea7-85e7-4768ecd449f5", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"982bd2844d72df9d4ee25b9d16b5757995a0610dd81506a6c926138cddb1efb2", Pod:"calico-apiserver-549c48f8dd-dvzbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b5cd036b4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.402 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.402 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" iface="eth0" netns="" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.402 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.402 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.485 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.485 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.485 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.498 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.498 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" HandleID="k8s-pod-network.745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--dvzbn-eth0" Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.503 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:19.512749 containerd[1598]: 2025-07-07 02:49:19.508 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094" Jul 7 02:49:19.512749 containerd[1598]: time="2025-07-07T02:49:19.512321484Z" level=info msg="TearDown network for sandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" successfully" Jul 7 02:49:19.537627 containerd[1598]: time="2025-07-07T02:49:19.537581311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:19.565253 containerd[1598]: time="2025-07-07T02:49:19.565198133Z" level=info msg="RemovePodSandbox \"745ff921adcf68a4cfdb5203dbc7c1c7fb9a5168b6bfb15e4661e79c6e49b094\" returns successfully" Jul 7 02:49:19.567287 containerd[1598]: time="2025-07-07T02:49:19.567242685Z" level=info msg="StopPodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\"" Jul 7 02:49:19.581182 containerd[1598]: time="2025-07-07T02:49:19.581035185Z" level=info msg="CreateContainer within sandbox \"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 02:49:19.616175 containerd[1598]: time="2025-07-07T02:49:19.616091212Z" level=info msg="CreateContainer within sandbox \"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"02e03b367c5dc3b8b1f2fe5dd8abbb1690227122523a792b1686d8c40a204cbb\"" Jul 7 02:49:19.621797 containerd[1598]: time="2025-07-07T02:49:19.621760041Z" level=info msg="StartContainer for \"02e03b367c5dc3b8b1f2fe5dd8abbb1690227122523a792b1686d8c40a204cbb\"" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.634 [WARNING][5628] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f", Pod:"calico-apiserver-549c48f8dd-b5dg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia07f16fb571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.635 [INFO][5628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.635 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" iface="eth0" netns="" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.635 [INFO][5628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.635 [INFO][5628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.709 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.709 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.710 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.721 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.722 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.725 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:19.737005 containerd[1598]: 2025-07-07 02:49:19.730 [INFO][5628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.737005 containerd[1598]: time="2025-07-07T02:49:19.736766502Z" level=info msg="TearDown network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" successfully" Jul 7 02:49:19.737005 containerd[1598]: time="2025-07-07T02:49:19.736791868Z" level=info msg="StopPodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" returns successfully" Jul 7 02:49:19.741675 containerd[1598]: time="2025-07-07T02:49:19.738056075Z" level=info msg="RemovePodSandbox for \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\"" Jul 7 02:49:19.741675 containerd[1598]: time="2025-07-07T02:49:19.738118774Z" level=info msg="Forcibly stopping sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\"" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.827 [WARNING][5654] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0", GenerateName:"calico-apiserver-549c48f8dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d34f9b09-d10e-4fb0-9e83-4c578ee4bf07", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549c48f8dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"af186167d99322f08a62887b6d1e0fe4965f9a45bd5bb0aa6f05bcb83712e83f", Pod:"calico-apiserver-549c48f8dd-b5dg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia07f16fb571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.827 [INFO][5654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.827 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" iface="eth0" netns="" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.828 [INFO][5654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.828 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.914 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.915 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.915 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.924 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.925 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" HandleID="k8s-pod-network.de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--apiserver--549c48f8dd--b5dg9-eth0" Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.930 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:19.945050 containerd[1598]: 2025-07-07 02:49:19.934 [INFO][5654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069" Jul 7 02:49:19.945050 containerd[1598]: time="2025-07-07T02:49:19.943844233Z" level=info msg="TearDown network for sandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" successfully" Jul 7 02:49:19.972719 containerd[1598]: time="2025-07-07T02:49:19.972672221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:19.973201 containerd[1598]: time="2025-07-07T02:49:19.972912258Z" level=info msg="RemovePodSandbox \"de651dd1120a27833a957257af66e05a0e4b24fcf36c3aa1aee9fc33743bb069\" returns successfully" Jul 7 02:49:19.980052 containerd[1598]: time="2025-07-07T02:49:19.979328124Z" level=info msg="StopPodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\"" Jul 7 02:49:20.020616 containerd[1598]: time="2025-07-07T02:49:20.020558859Z" level=info msg="StartContainer for \"02e03b367c5dc3b8b1f2fe5dd8abbb1690227122523a792b1686d8c40a204cbb\" returns successfully" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.057 [WARNING][5697] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b", Pod:"coredns-7c65d6cfc9-ddnt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice808665d05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.057 [INFO][5697] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.058 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" iface="eth0" netns="" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.058 [INFO][5697] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.058 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.096 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.097 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.097 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.105 [WARNING][5716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.105 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.108 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.115375 containerd[1598]: 2025-07-07 02:49:20.111 [INFO][5697] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.118065 containerd[1598]: time="2025-07-07T02:49:20.115401195Z" level=info msg="TearDown network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" successfully" Jul 7 02:49:20.118065 containerd[1598]: time="2025-07-07T02:49:20.115428265Z" level=info msg="StopPodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" returns successfully" Jul 7 02:49:20.118065 containerd[1598]: time="2025-07-07T02:49:20.116658479Z" level=info msg="RemovePodSandbox for \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\"" Jul 7 02:49:20.118065 containerd[1598]: time="2025-07-07T02:49:20.116716638Z" level=info msg="Forcibly stopping sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\"" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.177 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a24b7f2-ddb0-42b0-bd4a-3ac6f41d83cd", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"bc838bf94b685e7cc3ab03d08f740303e0f33c7a610b159dc98c54c1f40d7e2b", Pod:"coredns-7c65d6cfc9-ddnt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice808665d05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.177 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.177 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" iface="eth0" netns="" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.177 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.177 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.213 [INFO][5738] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.213 [INFO][5738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.213 [INFO][5738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.220 [WARNING][5738] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.220 [INFO][5738] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" HandleID="k8s-pod-network.84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--ddnt5-eth0" Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.223 [INFO][5738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.228897 containerd[1598]: 2025-07-07 02:49:20.226 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275" Jul 7 02:49:20.228897 containerd[1598]: time="2025-07-07T02:49:20.228814772Z" level=info msg="TearDown network for sandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" successfully" Jul 7 02:49:20.232816 containerd[1598]: time="2025-07-07T02:49:20.232588114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:20.232816 containerd[1598]: time="2025-07-07T02:49:20.232662864Z" level=info msg="RemovePodSandbox \"84e5d8698cf571ff68977dee078947cf8d10a3f607415a22f153ff6561ff5275\" returns successfully" Jul 7 02:49:20.233783 containerd[1598]: time="2025-07-07T02:49:20.233693721Z" level=info msg="StopPodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\"" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.281 [WARNING][5752] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.281 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.281 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" iface="eth0" netns="" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.281 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.281 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.318 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.319 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.319 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.330 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.330 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.332 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.339214 containerd[1598]: 2025-07-07 02:49:20.336 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.341022 containerd[1598]: time="2025-07-07T02:49:20.339538875Z" level=info msg="TearDown network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" successfully" Jul 7 02:49:20.341022 containerd[1598]: time="2025-07-07T02:49:20.339587260Z" level=info msg="StopPodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" returns successfully" Jul 7 02:49:20.341022 containerd[1598]: time="2025-07-07T02:49:20.340365011Z" level=info msg="RemovePodSandbox for \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\"" Jul 7 02:49:20.341022 containerd[1598]: time="2025-07-07T02:49:20.340395283Z" level=info msg="Forcibly stopping sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\"" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.383 [WARNING][5773] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" WorkloadEndpoint="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.384 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.384 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" iface="eth0" netns="" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.384 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.384 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.423 [INFO][5780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.423 [INFO][5780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.423 [INFO][5780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.431 [WARNING][5780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.431 [INFO][5780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" HandleID="k8s-pod-network.69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Workload="srv--ijdf9.gb1.brightbox.com-k8s-whisker--6bb4cf4bdb--rwl2k-eth0" Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.433 [INFO][5780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.439265 containerd[1598]: 2025-07-07 02:49:20.435 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa" Jul 7 02:49:20.439265 containerd[1598]: time="2025-07-07T02:49:20.439335548Z" level=info msg="TearDown network for sandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" successfully" Jul 7 02:49:20.443342 containerd[1598]: time="2025-07-07T02:49:20.443294825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:20.443579 containerd[1598]: time="2025-07-07T02:49:20.443460130Z" level=info msg="RemovePodSandbox \"69243925f2b62c96907ce7f00ff3b0dbb23716d55a811cda76e10c33430ae4fa\" returns successfully" Jul 7 02:49:20.444343 containerd[1598]: time="2025-07-07T02:49:20.443989422Z" level=info msg="StopPodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\"" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.499 [WARNING][5795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d6134fd-985e-434b-964a-df65b698ac32", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071", Pod:"csi-node-driver-fwz77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e857705598", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.499 [INFO][5795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.499 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" iface="eth0" netns="" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.500 [INFO][5795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.500 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.529 [INFO][5802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.529 [INFO][5802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.529 [INFO][5802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.537 [WARNING][5802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.537 [INFO][5802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.541 [INFO][5802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.551286 containerd[1598]: 2025-07-07 02:49:20.545 [INFO][5795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.551286 containerd[1598]: time="2025-07-07T02:49:20.551112129Z" level=info msg="TearDown network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" successfully" Jul 7 02:49:20.551286 containerd[1598]: time="2025-07-07T02:49:20.551250670Z" level=info msg="StopPodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" returns successfully" Jul 7 02:49:20.555219 containerd[1598]: time="2025-07-07T02:49:20.552846476Z" level=info msg="RemovePodSandbox for \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\"" Jul 7 02:49:20.555219 containerd[1598]: time="2025-07-07T02:49:20.552913321Z" level=info msg="Forcibly stopping sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\"" Jul 7 02:49:20.609750 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:20.594645 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:20.594656 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:20.735960 kubelet[2809]: I0707 02:49:20.735850 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fwz77" podStartSLOduration=27.164253715 podStartE2EDuration="42.726875876s" podCreationTimestamp="2025-07-07 02:48:38 +0000 UTC" firstStartedPulling="2025-07-07 02:49:03.885357622 +0000 UTC m=+46.228702372" lastFinishedPulling="2025-07-07 02:49:19.447979787 +0000 UTC m=+61.791324533" observedRunningTime="2025-07-07 02:49:20.717509028 +0000 UTC m=+63.060853799" watchObservedRunningTime="2025-07-07 02:49:20.726875876 +0000 UTC m=+63.070220622" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.666 [WARNING][5817] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d6134fd-985e-434b-964a-df65b698ac32", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"bcad93abb9f7dbcbf591692e31d34f168a043e65eda2c594875e725deebca071", Pod:"csi-node-driver-fwz77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e857705598", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.666 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.667 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" iface="eth0" netns="" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.667 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.667 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.721 [INFO][5825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.722 [INFO][5825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.722 [INFO][5825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.734 [WARNING][5825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.734 [INFO][5825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" HandleID="k8s-pod-network.0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Workload="srv--ijdf9.gb1.brightbox.com-k8s-csi--node--driver--fwz77-eth0" Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.742 [INFO][5825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.750767 containerd[1598]: 2025-07-07 02:49:20.746 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6" Jul 7 02:49:20.755486 containerd[1598]: time="2025-07-07T02:49:20.750820322Z" level=info msg="TearDown network for sandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" successfully" Jul 7 02:49:20.800854 containerd[1598]: time="2025-07-07T02:49:20.799904166Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:20.800854 containerd[1598]: time="2025-07-07T02:49:20.800055064Z" level=info msg="RemovePodSandbox \"0e3b27cbdcf3147137e8284e6395893b16c1d85e6d4988f9e6b79a39c62a78e6\" returns successfully" Jul 7 02:49:20.802444 containerd[1598]: time="2025-07-07T02:49:20.801747389Z" level=info msg="StopPodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\"" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.865 [WARNING][5839] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6e39c44d-9500-40c6-a8ed-19a5f1bb302d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8", Pod:"coredns-7c65d6cfc9-grxdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6eb94fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.865 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.865 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" iface="eth0" netns="" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.865 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.865 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.909 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.910 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.910 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.917 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.917 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.918 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:20.923161 containerd[1598]: 2025-07-07 02:49:20.921 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:20.923841 containerd[1598]: time="2025-07-07T02:49:20.923815560Z" level=info msg="TearDown network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" successfully" Jul 7 02:49:20.923921 containerd[1598]: time="2025-07-07T02:49:20.923909489Z" level=info msg="StopPodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" returns successfully" Jul 7 02:49:20.924543 containerd[1598]: time="2025-07-07T02:49:20.924501192Z" level=info msg="RemovePodSandbox for \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\"" Jul 7 02:49:20.924965 containerd[1598]: time="2025-07-07T02:49:20.924632628Z" level=info msg="Forcibly stopping sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\"" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.002 [WARNING][5860] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6e39c44d-9500-40c6-a8ed-19a5f1bb302d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"96ab9d27065cb2f6e7ce7a94b43bea11cc2567b041417651da7f56b5597a23a8", Pod:"coredns-7c65d6cfc9-grxdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cb6eb94fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.002 [INFO][5860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.002 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" iface="eth0" netns="" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.002 [INFO][5860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.002 [INFO][5860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.033 [INFO][5867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.033 [INFO][5867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.033 [INFO][5867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.041 [WARNING][5867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.041 [INFO][5867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" HandleID="k8s-pod-network.ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Workload="srv--ijdf9.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--grxdk-eth0" Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.043 [INFO][5867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:21.048365 containerd[1598]: 2025-07-07 02:49:21.045 [INFO][5860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c" Jul 7 02:49:21.048943 containerd[1598]: time="2025-07-07T02:49:21.048402422Z" level=info msg="TearDown network for sandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" successfully" Jul 7 02:49:21.050713 containerd[1598]: time="2025-07-07T02:49:21.050583034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:21.050713 containerd[1598]: time="2025-07-07T02:49:21.050646819Z" level=info msg="RemovePodSandbox \"ee5015c7fb812d3e29ffcf6ec895f77ff2f29bfa27d58415a0c4275fa625190c\" returns successfully" Jul 7 02:49:21.051945 containerd[1598]: time="2025-07-07T02:49:21.051907971Z" level=info msg="StopPodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\"" Jul 7 02:49:21.075646 kubelet[2809]: I0707 02:49:21.075447 2809 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 02:49:21.083754 kubelet[2809]: I0707 02:49:21.083640 2809 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.099 [WARNING][5881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b1996f07-5d79-4f58-bbb7-a4685ca36d35", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b", Pod:"goldmane-58fd7646b9-nnz8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bf71826d2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.099 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.099 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" iface="eth0" netns="" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.099 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.099 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.139 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.140 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.140 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.148 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.148 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.152 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:21.157308 containerd[1598]: 2025-07-07 02:49:21.155 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.159017 containerd[1598]: time="2025-07-07T02:49:21.157355563Z" level=info msg="TearDown network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" successfully" Jul 7 02:49:21.159017 containerd[1598]: time="2025-07-07T02:49:21.157381955Z" level=info msg="StopPodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" returns successfully" Jul 7 02:49:21.159017 containerd[1598]: time="2025-07-07T02:49:21.158823812Z" level=info msg="RemovePodSandbox for \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\"" Jul 7 02:49:21.159017 containerd[1598]: time="2025-07-07T02:49:21.158857487Z" level=info msg="Forcibly stopping sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\"" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.216 [WARNING][5904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b1996f07-5d79-4f58-bbb7-a4685ca36d35", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"6fec8bc69b37655936342ac6e993012fde3e6f7856e85e0fb9c4828fcbfce35b", Pod:"goldmane-58fd7646b9-nnz8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bf71826d2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.217 [INFO][5904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.217 [INFO][5904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" iface="eth0" netns="" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.217 [INFO][5904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.217 [INFO][5904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.255 [INFO][5911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.255 [INFO][5911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.255 [INFO][5911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.267 [WARNING][5911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.267 [INFO][5911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" HandleID="k8s-pod-network.7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Workload="srv--ijdf9.gb1.brightbox.com-k8s-goldmane--58fd7646b9--nnz8r-eth0" Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.269 [INFO][5911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:21.274951 containerd[1598]: 2025-07-07 02:49:21.271 [INFO][5904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656" Jul 7 02:49:21.276391 containerd[1598]: time="2025-07-07T02:49:21.275563341Z" level=info msg="TearDown network for sandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" successfully" Jul 7 02:49:21.278262 containerd[1598]: time="2025-07-07T02:49:21.278226301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:21.278316 containerd[1598]: time="2025-07-07T02:49:21.278299998Z" level=info msg="RemovePodSandbox \"7d9e082d0321b0be82967386a0f55f2a760295cff0b0a199eaf7cca822f53656\" returns successfully" Jul 7 02:49:21.278977 containerd[1598]: time="2025-07-07T02:49:21.278945953Z" level=info msg="StopPodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\"" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.325 [WARNING][5925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0", GenerateName:"calico-kube-controllers-84b8488869-", Namespace:"calico-system", SelfLink:"", UID:"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b8488869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5", Pod:"calico-kube-controllers-84b8488869-mvrcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali192c8a8ac1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.326 [INFO][5925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.326 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" iface="eth0" netns="" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.326 [INFO][5925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.326 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.355 [INFO][5932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.355 [INFO][5932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.355 [INFO][5932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.367 [WARNING][5932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.367 [INFO][5932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.369 [INFO][5932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:21.373337 containerd[1598]: 2025-07-07 02:49:21.371 [INFO][5925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.374095 containerd[1598]: time="2025-07-07T02:49:21.373311621Z" level=info msg="TearDown network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" successfully" Jul 7 02:49:21.374095 containerd[1598]: time="2025-07-07T02:49:21.373972366Z" level=info msg="StopPodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" returns successfully" Jul 7 02:49:21.374811 containerd[1598]: time="2025-07-07T02:49:21.374499649Z" level=info msg="RemovePodSandbox for \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\"" Jul 7 02:49:21.374811 containerd[1598]: time="2025-07-07T02:49:21.374539510Z" level=info msg="Forcibly stopping sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\"" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.417 [WARNING][5946] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0", GenerateName:"calico-kube-controllers-84b8488869-", Namespace:"calico-system", SelfLink:"", UID:"bb5b56a2-f557-4e2e-85d3-68d9ce53c0bd", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b8488869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ijdf9.gb1.brightbox.com", ContainerID:"297167599ddd3bbbb1b0e509c43c4544b6ab61856c15e2e5554fa182735a4ca5", Pod:"calico-kube-controllers-84b8488869-mvrcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali192c8a8ac1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.418 [INFO][5946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.418 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" iface="eth0" netns="" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.418 [INFO][5946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.418 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.447 [INFO][5953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.448 [INFO][5953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.448 [INFO][5953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.456 [WARNING][5953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.456 [INFO][5953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" HandleID="k8s-pod-network.e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Workload="srv--ijdf9.gb1.brightbox.com-k8s-calico--kube--controllers--84b8488869--mvrcm-eth0" Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.459 [INFO][5953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:49:21.463618 containerd[1598]: 2025-07-07 02:49:21.461 [INFO][5946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57" Jul 7 02:49:21.464157 containerd[1598]: time="2025-07-07T02:49:21.463677328Z" level=info msg="TearDown network for sandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" successfully" Jul 7 02:49:21.482517 containerd[1598]: time="2025-07-07T02:49:21.482006952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:49:21.482658 containerd[1598]: time="2025-07-07T02:49:21.482570552Z" level=info msg="RemovePodSandbox \"e07b2d06e878b1cea353e2a9fd41b015f37961d4c97133071139309b6ac4fe57\" returns successfully" Jul 7 02:49:22.652711 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:22.642339 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:22.642353 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:28.822275 kubelet[2809]: I0707 02:49:28.812882 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:49:39.051540 kubelet[2809]: I0707 02:49:39.050391 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:49:48.303263 systemd[1]: Started sshd@7-10.244.101.74:22-139.178.68.195:35820.service - OpenSSH per-connection server daemon (139.178.68.195:35820). Jul 7 02:49:49.349643 sshd[6063]: Accepted publickey for core from 139.178.68.195 port 35820 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:49:49.354396 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:49:49.396756 systemd-logind[1572]: New session 10 of user core. Jul 7 02:49:49.404335 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 02:49:50.563881 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:50.547203 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:50.547211 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:50.622328 sshd[6063]: pam_unix(sshd:session): session closed for user core Jul 7 02:49:50.642277 systemd[1]: sshd@7-10.244.101.74:22-139.178.68.195:35820.service: Deactivated successfully. Jul 7 02:49:50.651539 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 02:49:50.651739 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. Jul 7 02:49:50.657433 systemd-logind[1572]: Removed session 10. 
Jul 7 02:49:52.602436 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:49:52.601804 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:49:52.601816 systemd-resolved[1505]: Flushed all caches. Jul 7 02:49:55.785274 systemd[1]: Started sshd@8-10.244.101.74:22-139.178.68.195:35822.service - OpenSSH per-connection server daemon (139.178.68.195:35822). Jul 7 02:49:56.732442 sshd[6083]: Accepted publickey for core from 139.178.68.195 port 35822 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:49:56.734800 sshd[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:49:56.741718 systemd-logind[1572]: New session 11 of user core. Jul 7 02:49:56.746567 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 02:49:57.782533 sshd[6083]: pam_unix(sshd:session): session closed for user core Jul 7 02:49:57.802055 systemd[1]: sshd@8-10.244.101.74:22-139.178.68.195:35822.service: Deactivated successfully. Jul 7 02:49:57.816386 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 02:49:57.817328 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Jul 7 02:49:57.825133 systemd-logind[1572]: Removed session 11. Jul 7 02:50:02.929784 systemd[1]: Started sshd@9-10.244.101.74:22-139.178.68.195:39432.service - OpenSSH per-connection server daemon (139.178.68.195:39432). Jul 7 02:50:03.905815 sshd[6119]: Accepted publickey for core from 139.178.68.195 port 39432 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:03.909720 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:03.921509 systemd-logind[1572]: New session 12 of user core. Jul 7 02:50:03.927926 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 02:50:04.577325 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:04.562509 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:04.562518 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:04.998716 sshd[6119]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:05.004849 systemd[1]: sshd@9-10.244.101.74:22-139.178.68.195:39432.service: Deactivated successfully. Jul 7 02:50:05.008697 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Jul 7 02:50:05.012245 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 02:50:05.014124 systemd-logind[1572]: Removed session 12. Jul 7 02:50:05.154542 systemd[1]: Started sshd@10-10.244.101.74:22-139.178.68.195:39444.service - OpenSSH per-connection server daemon (139.178.68.195:39444). Jul 7 02:50:06.046472 sshd[6136]: Accepted publickey for core from 139.178.68.195 port 39444 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:06.048755 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:06.062327 systemd-logind[1572]: New session 13 of user core. Jul 7 02:50:06.071066 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 02:50:07.117185 sshd[6136]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:07.130735 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Jul 7 02:50:07.131604 systemd[1]: sshd@10-10.244.101.74:22-139.178.68.195:39444.service: Deactivated successfully. Jul 7 02:50:07.138289 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 02:50:07.140282 systemd-logind[1572]: Removed session 13. 
Jul 7 02:50:07.263723 systemd[1]: Started sshd@11-10.244.101.74:22-139.178.68.195:39450.service - OpenSSH per-connection server daemon (139.178.68.195:39450). Jul 7 02:50:08.224175 sshd[6148]: Accepted publickey for core from 139.178.68.195 port 39450 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:08.226513 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:08.234412 systemd-logind[1572]: New session 14 of user core. Jul 7 02:50:08.241890 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 02:50:09.213809 sshd[6148]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:09.238036 systemd[1]: sshd@11-10.244.101.74:22-139.178.68.195:39450.service: Deactivated successfully. Jul 7 02:50:09.250395 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Jul 7 02:50:09.260152 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 02:50:09.265581 systemd-logind[1572]: Removed session 14. Jul 7 02:50:10.606485 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:10.578518 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:10.578529 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:14.364541 systemd[1]: Started sshd@12-10.244.101.74:22-139.178.68.195:44004.service - OpenSSH per-connection server daemon (139.178.68.195:44004). Jul 7 02:50:15.286284 sshd[6163]: Accepted publickey for core from 139.178.68.195 port 44004 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:15.290687 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:15.307904 systemd-logind[1572]: New session 15 of user core. Jul 7 02:50:15.310436 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 02:50:16.352737 sshd[6163]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:16.361504 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Jul 7 02:50:16.362178 systemd[1]: sshd@12-10.244.101.74:22-139.178.68.195:44004.service: Deactivated successfully. Jul 7 02:50:16.371954 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 02:50:16.377205 systemd-logind[1572]: Removed session 15. Jul 7 02:50:18.589488 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:18.579392 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:18.579401 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:20.635651 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:20.626295 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:20.626310 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:21.513622 systemd[1]: Started sshd@13-10.244.101.74:22-139.178.68.195:38406.service - OpenSSH per-connection server daemon (139.178.68.195:38406). Jul 7 02:50:22.494003 sshd[6242]: Accepted publickey for core from 139.178.68.195 port 38406 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:22.503456 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:22.513484 systemd-logind[1572]: New session 16 of user core. Jul 7 02:50:22.519429 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 02:50:23.830437 sshd[6242]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:23.841832 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. 
Jul 7 02:50:23.844456 systemd[1]: sshd@13-10.244.101.74:22-139.178.68.195:38406.service: Deactivated successfully. Jul 7 02:50:23.854093 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 02:50:23.856959 systemd-logind[1572]: Removed session 16. Jul 7 02:50:28.985937 systemd[1]: Started sshd@14-10.244.101.74:22-139.178.68.195:56666.service - OpenSSH per-connection server daemon (139.178.68.195:56666). Jul 7 02:50:29.919676 sshd[6285]: Accepted publickey for core from 139.178.68.195 port 56666 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:29.922254 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:29.936462 systemd-logind[1572]: New session 17 of user core. Jul 7 02:50:29.942496 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 02:50:30.858506 systemd[1]: run-containerd-runc-k8s.io-0d9de881d7922dfe1edff4e0d13f53cb0bd54665bf492e29886da9d72ca0541b-runc.XfDOUl.mount: Deactivated successfully. Jul 7 02:50:31.237166 sshd[6285]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:31.255843 systemd[1]: sshd@14-10.244.101.74:22-139.178.68.195:56666.service: Deactivated successfully. Jul 7 02:50:31.283472 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. Jul 7 02:50:31.283650 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 02:50:31.291185 systemd-logind[1572]: Removed session 17. Jul 7 02:50:31.388297 systemd[1]: Started sshd@15-10.244.101.74:22-139.178.68.195:56672.service - OpenSSH per-connection server daemon (139.178.68.195:56672). Jul 7 02:50:32.322277 sshd[6323]: Accepted publickey for core from 139.178.68.195 port 56672 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:32.323996 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:32.338203 systemd-logind[1572]: New session 18 of user core. Jul 7 02:50:32.342491 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 02:50:32.662577 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:32.661406 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:32.661416 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:33.355683 sshd[6323]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:33.366449 systemd[1]: sshd@15-10.244.101.74:22-139.178.68.195:56672.service: Deactivated successfully. Jul 7 02:50:33.379837 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. Jul 7 02:50:33.380825 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 02:50:33.384288 systemd-logind[1572]: Removed session 18. Jul 7 02:50:33.510524 systemd[1]: Started sshd@16-10.244.101.74:22-139.178.68.195:56676.service - OpenSSH per-connection server daemon (139.178.68.195:56676). Jul 7 02:50:34.429530 sshd[6348]: Accepted publickey for core from 139.178.68.195 port 56676 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:34.432743 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:34.445720 systemd-logind[1572]: New session 19 of user core. Jul 7 02:50:34.451234 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 02:50:38.647984 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:38.648351 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:38.648367 systemd-resolved[1505]: Flushed all caches. 
Jul 7 02:50:38.773573 sshd[6348]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:38.827573 systemd[1]: sshd@16-10.244.101.74:22-139.178.68.195:56676.service: Deactivated successfully. Jul 7 02:50:38.839751 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 02:50:38.840815 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. Jul 7 02:50:38.860495 systemd-logind[1572]: Removed session 19. Jul 7 02:50:38.921491 systemd[1]: Started sshd@17-10.244.101.74:22-139.178.68.195:41226.service - OpenSSH per-connection server daemon (139.178.68.195:41226). Jul 7 02:50:39.879232 sshd[6378]: Accepted publickey for core from 139.178.68.195 port 41226 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:39.885560 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:39.940430 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 02:50:39.940717 systemd-logind[1572]: New session 20 of user core. Jul 7 02:50:40.455671 kubelet[2809]: E0707 02:50:40.455600 2809 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.378s" Jul 7 02:50:40.704792 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:40.674903 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:40.674919 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:41.720637 sshd[6378]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:41.776888 systemd[1]: sshd@17-10.244.101.74:22-139.178.68.195:41226.service: Deactivated successfully. Jul 7 02:50:41.792043 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Jul 7 02:50:41.792175 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 02:50:41.800432 systemd-logind[1572]: Removed session 20. Jul 7 02:50:41.885064 systemd[1]: Started sshd@18-10.244.101.74:22-139.178.68.195:41238.service - OpenSSH per-connection server daemon (139.178.68.195:41238). Jul 7 02:50:42.706961 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:42.720851 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:42.706969 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:42.854262 sshd[6393]: Accepted publickey for core from 139.178.68.195 port 41238 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:42.856381 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:42.871361 systemd-logind[1572]: New session 21 of user core. Jul 7 02:50:42.874701 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 02:50:44.383409 sshd[6393]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:44.391067 systemd[1]: sshd@18-10.244.101.74:22-139.178.68.195:41238.service: Deactivated successfully. Jul 7 02:50:44.401027 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 02:50:44.404027 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Jul 7 02:50:44.408682 systemd-logind[1572]: Removed session 21. Jul 7 02:50:49.551784 systemd[1]: Started sshd@19-10.244.101.74:22-139.178.68.195:50144.service - OpenSSH per-connection server daemon (139.178.68.195:50144). Jul 7 02:50:50.592066 systemd-journald[1172]: Under memory pressure, flushing caches. 
Jul 7 02:50:50.607845 sshd[6452]: Accepted publickey for core from 139.178.68.195 port 50144 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:50.594005 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:50.593589 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:50.593602 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:50.622187 systemd-logind[1572]: New session 22 of user core. Jul 7 02:50:50.628646 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 02:50:52.648482 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:50:52.648874 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:50:52.648896 systemd-resolved[1505]: Flushed all caches. Jul 7 02:50:52.715276 sshd[6452]: pam_unix(sshd:session): session closed for user core Jul 7 02:50:52.743755 systemd[1]: sshd@19-10.244.101.74:22-139.178.68.195:50144.service: Deactivated successfully. Jul 7 02:50:52.754278 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit. Jul 7 02:50:52.755167 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 02:50:52.762006 systemd-logind[1572]: Removed session 22. Jul 7 02:50:57.902420 systemd[1]: Started sshd@20-10.244.101.74:22-139.178.68.195:50156.service - OpenSSH per-connection server daemon (139.178.68.195:50156). Jul 7 02:50:58.866550 sshd[6469]: Accepted publickey for core from 139.178.68.195 port 50156 ssh2: RSA SHA256:OzzIFs54pJXMP2eymQNEzIb/qF+YzQ98zvMT1AG90zI Jul 7 02:50:58.870932 sshd[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:50:58.886316 systemd-logind[1572]: New session 23 of user core. Jul 7 02:50:58.894633 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 02:50:59.566428 systemd[1]: Started sshd@21-10.244.101.74:22-85.159.226.148:43928.service - OpenSSH per-connection server daemon (85.159.226.148:43928). Jul 7 02:50:59.882675 sshd[6480]: Received disconnect from 85.159.226.148 port 43928:11: Bye Bye [preauth] Jul 7 02:50:59.893345 sshd[6480]: Disconnected from authenticating user root 85.159.226.148 port 43928 [preauth] Jul 7 02:50:59.894423 systemd[1]: sshd@21-10.244.101.74:22-85.159.226.148:43928.service: Deactivated successfully. Jul 7 02:51:00.401755 sshd[6469]: pam_unix(sshd:session): session closed for user core Jul 7 02:51:00.414795 systemd[1]: sshd@20-10.244.101.74:22-139.178.68.195:50156.service: Deactivated successfully. Jul 7 02:51:00.425447 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 02:51:00.434093 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit. Jul 7 02:51:00.439193 systemd-logind[1572]: Removed session 23. Jul 7 02:51:00.636885 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:51:00.626265 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:51:00.626275 systemd-resolved[1505]: Flushed all caches. Jul 7 02:51:02.677537 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 02:51:02.677118 systemd-resolved[1505]: Under memory pressure, flushing caches. Jul 7 02:51:02.677128 systemd-resolved[1505]: Flushed all caches.