Jan 29 14:35:47.047526 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 14:35:47.047568 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 14:35:47.047583 kernel: BIOS-provided physical RAM map:
Jan 29 14:35:47.047599 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 14:35:47.047609 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 14:35:47.047619 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 14:35:47.047631 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 14:35:47.047641 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 14:35:47.047651 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 14:35:47.047662 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 14:35:47.047672 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 14:35:47.047683 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 14:35:47.047699 kernel: NX (Execute Disable) protection: active
Jan 29 14:35:47.047709 kernel: APIC: Static calls initialized
Jan 29 14:35:47.047722 kernel: SMBIOS 2.8 present.
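The `BIOS-e820` entries above describe the firmware-provided physical memory map as inclusive address ranges with a type. A minimal sketch for tallying bytes per region type from a saved copy of such a log (the helper name and regex are illustrative, not part of any kernel tooling):

```python
import re

# Matches "BIOS-e820: [mem 0xSTART-0xEND] TYPE" entries as printed in the log.
E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def e820_totals(log_text):
    """Sum bytes per e820 region type; ranges are inclusive of both ends."""
    totals = {}
    for start, end, kind in E820_RE.findall(log_text):
        size = int(end, 16) - int(start, 16) + 1
        totals[kind] = totals.get(kind, 0) + size
    return totals
```

Applied to the map above, the two usable ranges sum to just under 2 GiB, which is consistent with the "Memory: 1901528K/2096616K available" line later in this boot.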
Jan 29 14:35:47.047734 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 29 14:35:47.047745 kernel: Hypervisor detected: KVM
Jan 29 14:35:47.047761 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 14:35:47.047773 kernel: kvm-clock: using sched offset of 4366125743 cycles
Jan 29 14:35:47.047785 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 14:35:47.047797 kernel: tsc: Detected 2499.998 MHz processor
Jan 29 14:35:47.047844 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 14:35:47.047857 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 14:35:47.047868 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 14:35:47.047880 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 14:35:47.047892 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 14:35:47.047910 kernel: Using GB pages for direct mapping
Jan 29 14:35:47.047921 kernel: ACPI: Early table checksum verification disabled
Jan 29 14:35:47.047933 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 14:35:47.047944 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.047956 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.047968 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.047979 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 14:35:47.047991 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.048002 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.048019 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.048031 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:35:47.048042 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 14:35:47.048053 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 14:35:47.048065 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 14:35:47.048084 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 14:35:47.048096 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 14:35:47.048112 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 14:35:47.048125 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 14:35:47.048137 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 14:35:47.048149 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 14:35:47.048161 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 14:35:47.048173 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 14:35:47.048184 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 14:35:47.048202 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 14:35:47.048214 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 14:35:47.048225 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 14:35:47.048237 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 14:35:47.048249 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 14:35:47.048261 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 14:35:47.048273 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 14:35:47.048285 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 14:35:47.048297 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 14:35:47.048309 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 14:35:47.048326 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 14:35:47.048338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 14:35:47.048350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 14:35:47.048362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 14:35:47.048374 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 14:35:47.048386 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 14:35:47.048399 kernel: Zone ranges:
Jan 29 14:35:47.048411 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 14:35:47.048423 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 14:35:47.048440 kernel: Normal empty
Jan 29 14:35:47.048452 kernel: Movable zone start for each node
Jan 29 14:35:47.048464 kernel: Early memory node ranges
Jan 29 14:35:47.048476 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 14:35:47.048488 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 14:35:47.048500 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 14:35:47.048512 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 14:35:47.048535 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 14:35:47.048548 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 14:35:47.048560 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 14:35:47.048578 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 14:35:47.048591 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 14:35:47.048603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 14:35:47.048615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 14:35:47.048627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 14:35:47.048639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 14:35:47.048651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 14:35:47.048663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 14:35:47.048675 kernel: TSC deadline timer available
Jan 29 14:35:47.048692 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 14:35:47.048704 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 14:35:47.048716 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 14:35:47.048728 kernel: Booting paravirtualized kernel on KVM
Jan 29 14:35:47.048740 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 14:35:47.048752 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 14:35:47.048765 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 14:35:47.048777 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 14:35:47.048789 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 14:35:47.048821 kernel: kvm-guest: PV spinlocks enabled
Jan 29 14:35:47.048834 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 14:35:47.048848 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 14:35:47.048861 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 14:35:47.048873 kernel: random: crng init done
Jan 29 14:35:47.048885 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 14:35:47.048898 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 14:35:47.048910 kernel: Fallback order for Node 0: 0
Jan 29 14:35:47.048928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 14:35:47.048940 kernel: Policy zone: DMA32
Jan 29 14:35:47.048952 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 14:35:47.048964 kernel: software IO TLB: area num 16.
Jan 29 14:35:47.048977 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved)
Jan 29 14:35:47.048989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 14:35:47.049001 kernel: Kernel/User page tables isolation: enabled
Jan 29 14:35:47.049013 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 14:35:47.049025 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 14:35:47.049042 kernel: Dynamic Preempt: voluntary
Jan 29 14:35:47.049054 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 14:35:47.049067 kernel: rcu: RCU event tracing is enabled.
Jan 29 14:35:47.049079 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 14:35:47.049092 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 14:35:47.049117 kernel: Rude variant of Tasks RCU enabled.
Jan 29 14:35:47.049135 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 14:35:47.049147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 14:35:47.049160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 14:35:47.049172 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 14:35:47.049185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
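The "Command line:" and "Kernel command line:" entries in this boot are space-separated tokens, either bare flags (e.g. `flatcar.autologin`) or `key=value` pairs where the value may itself contain `=` (e.g. `root=LABEL=ROOT`). A minimal parsing sketch for such a line (the helper name is hypothetical; a repeated key like `console` keeps its last value here, which matches how most, though not all, kernel parameters shadow earlier ones):

```python
def parse_cmdline(cmdline):
    """Parse a kernel command line into a dict; bare flags map to True."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")  # split on first '=' only
        params[key] = value if sep else True
    return params
```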
Jan 29 14:35:47.049197 kernel: Console: colour VGA+ 80x25
Jan 29 14:35:47.049215 kernel: printk: console [tty0] enabled
Jan 29 14:35:47.049228 kernel: printk: console [ttyS0] enabled
Jan 29 14:35:47.049240 kernel: ACPI: Core revision 20230628
Jan 29 14:35:47.049253 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 14:35:47.049265 kernel: x2apic enabled
Jan 29 14:35:47.049283 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 14:35:47.049296 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 29 14:35:47.049309 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 29 14:35:47.049322 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 14:35:47.049335 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 14:35:47.049347 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 14:35:47.049360 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 14:35:47.049372 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 14:35:47.049385 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 14:35:47.049403 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 14:35:47.049416 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 14:35:47.049429 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 14:35:47.049441 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 14:35:47.049453 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 14:35:47.049466 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 29 14:35:47.049478 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 14:35:47.049491 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 14:35:47.049504 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 14:35:47.049525 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 14:35:47.049539 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 14:35:47.049558 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 14:35:47.049571 kernel: Freeing SMP alternatives memory: 32K
Jan 29 14:35:47.049583 kernel: pid_max: default: 32768 minimum: 301
Jan 29 14:35:47.049596 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 14:35:47.049608 kernel: landlock: Up and running.
Jan 29 14:35:47.049620 kernel: SELinux: Initializing.
Jan 29 14:35:47.049633 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 14:35:47.049646 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 14:35:47.049659 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 29 14:35:47.049671 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:35:47.049684 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:35:47.049703 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:35:47.049715 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 29 14:35:47.049728 kernel: signal: max sigframe size: 1776
Jan 29 14:35:47.049741 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 14:35:47.049754 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 14:35:47.049767 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 14:35:47.049779 kernel: smp: Bringing up secondary CPUs ...
Jan 29 14:35:47.049792 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 14:35:47.050769 kernel: .... node #0, CPUs: #1
Jan 29 14:35:47.050794 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 14:35:47.053776 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 14:35:47.053794 kernel: smpboot: Max logical packages: 16
Jan 29 14:35:47.053823 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 29 14:35:47.053837 kernel: devtmpfs: initialized
Jan 29 14:35:47.053850 kernel: x86/mm: Memory block size: 128MB
Jan 29 14:35:47.053863 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 14:35:47.053876 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 14:35:47.053889 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 14:35:47.053910 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 14:35:47.053923 kernel: audit: initializing netlink subsys (disabled)
Jan 29 14:35:47.053936 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 14:35:47.053949 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 14:35:47.053962 kernel: audit: type=2000 audit(1738161345.307:1): state=initialized audit_enabled=0 res=1
Jan 29 14:35:47.053975 kernel: cpuidle: using governor menu
Jan 29 14:35:47.053988 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 14:35:47.054001 kernel: dca service started, version 1.12.1
Jan 29 14:35:47.054014 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 14:35:47.054035 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 14:35:47.054049 kernel: PCI: Using configuration type 1 for base access
Jan 29 14:35:47.054062 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 14:35:47.054075 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 14:35:47.054088 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 14:35:47.054100 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 14:35:47.054113 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 14:35:47.054126 kernel: ACPI: Added _OSI(Module Device)
Jan 29 14:35:47.054139 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 14:35:47.054157 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 14:35:47.054170 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 14:35:47.054182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 14:35:47.054195 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 14:35:47.054210 kernel: ACPI: Interpreter enabled
Jan 29 14:35:47.054223 kernel: ACPI: PM: (supports S0 S5)
Jan 29 14:35:47.054284 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 14:35:47.054298 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 14:35:47.054315 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 14:35:47.054334 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 14:35:47.054347 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 14:35:47.054625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 14:35:47.054841 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 14:35:47.055018 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 14:35:47.055037 kernel: PCI host bridge to bus 0000:00
Jan 29 14:35:47.055232 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 14:35:47.055400 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 14:35:47.055570 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 14:35:47.055725 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 14:35:47.056269 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 14:35:47.056429 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 14:35:47.057081 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 14:35:47.057286 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 14:35:47.057526 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 14:35:47.057703 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 14:35:47.057896 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 14:35:47.058067 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 14:35:47.058238 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 14:35:47.058422 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.058619 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 14:35:47.061094 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.061284 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 14:35:47.061480 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.061670 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 14:35:47.062578 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.062761 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 14:35:47.064122 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.064298 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 14:35:47.064479 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.064663 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 14:35:47.064912 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.065091 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 14:35:47.065280 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 14:35:47.065448 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 14:35:47.065640 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 14:35:47.067907 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 14:35:47.068094 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 14:35:47.068269 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 14:35:47.068451 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 14:35:47.068655 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 14:35:47.069922 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 14:35:47.070109 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 14:35:47.070277 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 14:35:47.070455 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 14:35:47.070635 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 14:35:47.071892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 14:35:47.072068 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 29 14:35:47.072233 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 14:35:47.072408 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 14:35:47.072590 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 14:35:47.072777 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 14:35:47.074027 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 14:35:47.074209 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 14:35:47.074392 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 14:35:47.074577 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:35:47.074771 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 14:35:47.078063 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 14:35:47.078258 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 14:35:47.078431 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 14:35:47.078625 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 14:35:47.078847 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 14:35:47.079024 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 14:35:47.079190 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 14:35:47.079352 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 14:35:47.079540 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:35:47.079732 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 14:35:47.079936 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 14:35:47.080108 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 14:35:47.080275 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 14:35:47.080465 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:35:47.080661 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 14:35:47.081897 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 14:35:47.082099 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:35:47.082286 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 14:35:47.082466 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 14:35:47.082657 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:35:47.084930 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 14:35:47.085112 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 14:35:47.085283 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:35:47.085458 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 14:35:47.085654 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 14:35:47.087849 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:35:47.088039 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 14:35:47.088207 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 14:35:47.088372 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:35:47.088392 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 14:35:47.088406 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 14:35:47.088420 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 14:35:47.088441 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 14:35:47.088455 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 14:35:47.088468 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 14:35:47.088481 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 14:35:47.088494 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 14:35:47.088507 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 14:35:47.088532 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 14:35:47.088546 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 14:35:47.088559 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 14:35:47.088578 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 14:35:47.088592 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 14:35:47.088605 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 14:35:47.088618 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 14:35:47.088631 kernel: iommu: Default domain type: Translated
Jan 29 14:35:47.088644 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 14:35:47.088657 kernel: PCI: Using ACPI for IRQ routing
Jan 29 14:35:47.088670 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 14:35:47.088683 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 14:35:47.088701 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 14:35:47.088913 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 14:35:47.089079 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 14:35:47.089240 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 14:35:47.089260 kernel: vgaarb: loaded
Jan 29 14:35:47.089273 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 14:35:47.089286 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 14:35:47.089300 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 14:35:47.089321 kernel: pnp: PnP ACPI init
Jan 29 14:35:47.089500 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 14:35:47.089533 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 14:35:47.089548 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 14:35:47.089561 kernel: NET: Registered PF_INET protocol family
Jan 29 14:35:47.089574 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 14:35:47.089587 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 14:35:47.089600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 14:35:47.089613 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 14:35:47.089634 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 14:35:47.089647 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 14:35:47.089660 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 14:35:47.089674 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 14:35:47.089686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 14:35:47.089700 kernel: NET: Registered PF_XDP protocol family
Jan 29 14:35:47.089883 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 29 14:35:47.090052 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 14:35:47.090228 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 14:35:47.090393 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 14:35:47.090575 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 14:35:47.090744 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 14:35:47.092959 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 14:35:47.093133 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 14:35:47.093309 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 14:35:47.093473 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 14:35:47.093654 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 14:35:47.099248 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 14:35:47.099439 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 14:35:47.099621 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 14:35:47.099788 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 14:35:47.100008 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 14:35:47.100211 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 14:35:47.100390 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 14:35:47.100570 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 14:35:47.100737 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 14:35:47.100918 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 14:35:47.101082 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:35:47.101246 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 14:35:47.101409 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 14:35:47.101595 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 14:35:47.101762 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:35:47.101967 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 14:35:47.102139 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 14:35:47.102308 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 14:35:47.102486 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:35:47.102688 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 14:35:47.102882 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 14:35:47.103047 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 14:35:47.103209 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:35:47.103371 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 14:35:47.103550 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 14:35:47.103720 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 14:35:47.104944 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:35:47.105115 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 14:35:47.105290 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 14:35:47.105454 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 14:35:47.105632 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:35:47.105796 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 14:35:47.109166 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 14:35:47.109346 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 14:35:47.109524 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:35:47.109697 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 14:35:47.109907 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 14:35:47.110072 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 14:35:47.110236 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:35:47.110399 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 14:35:47.110562 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 14:35:47.110722 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 14:35:47.110889 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 14:35:47.111038 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 14:35:47.111186 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 14:35:47.111375 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 14:35:47.111559 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 14:35:47.111717 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:35:47.111942 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 14:35:47.112115 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 29 14:35:47.112273 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 14:35:47.112450 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:35:47.112649 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 29 14:35:47.112829 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 14:35:47.112989 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:35:47.113179 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 29 14:35:47.113334 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 14:35:47.113507 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:35:47.113714 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 29 14:35:47.120789 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 14:35:47.120983 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:35:47.121156 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 29 14:35:47.121324 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 14:35:47.121478 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:35:47.121660 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 29 14:35:47.121833 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 14:35:47.121991 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:35:47.122168 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 29 14:35:47.122322 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 14:35:47.122485 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:35:47.122507 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 14:35:47.122535 kernel: PCI: CLS 0 bytes, default 64
Jan 29 14:35:47.122549 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
29 14:35:47.122563 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 14:35:47.122578 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 14:35:47.122592 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 29 14:35:47.122606 kernel: Initialise system trusted keyrings Jan 29 14:35:47.122627 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 14:35:47.122641 kernel: Key type asymmetric registered Jan 29 14:35:47.122655 kernel: Asymmetric key parser 'x509' registered Jan 29 14:35:47.122668 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 14:35:47.122682 kernel: io scheduler mq-deadline registered Jan 29 14:35:47.122696 kernel: io scheduler kyber registered Jan 29 14:35:47.122710 kernel: io scheduler bfq registered Jan 29 14:35:47.122910 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 14:35:47.123081 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 14:35:47.123259 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.123444 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 14:35:47.123623 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 14:35:47.123789 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.123977 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 29 14:35:47.124143 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 29 14:35:47.124317 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.124484 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 29 
14:35:47.124663 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 29 14:35:47.125889 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.126063 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 29 14:35:47.126227 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 29 14:35:47.126401 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.126581 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 29 14:35:47.126744 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 29 14:35:47.127927 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.128107 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 29 14:35:47.128271 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 29 14:35:47.128457 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.128648 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 29 14:35:47.129860 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 29 14:35:47.130040 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:35:47.130063 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 14:35:47.130078 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 14:35:47.130102 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 14:35:47.130116 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 14:35:47.130130 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 14:35:47.130145 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 14:35:47.130159 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 14:35:47.130172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 14:35:47.130186 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 14:35:47.130359 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 14:35:47.130542 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 14:35:47.130699 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T14:35:46 UTC (1738161346) Jan 29 14:35:47.131952 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 14:35:47.131975 kernel: intel_pstate: CPU model not supported Jan 29 14:35:47.131989 kernel: NET: Registered PF_INET6 protocol family Jan 29 14:35:47.132003 kernel: Segment Routing with IPv6 Jan 29 14:35:47.132017 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 14:35:47.132030 kernel: NET: Registered PF_PACKET protocol family Jan 29 14:35:47.132044 kernel: Key type dns_resolver registered Jan 29 14:35:47.132066 kernel: IPI shorthand broadcast: enabled Jan 29 14:35:47.132081 kernel: sched_clock: Marking stable (1323003538, 239183183)->(1689634246, -127447525) Jan 29 14:35:47.132095 kernel: registered taskstats version 1 Jan 29 14:35:47.132109 kernel: Loading compiled-in X.509 certificates Jan 29 14:35:47.132123 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 14:35:47.132137 kernel: Key type .fscrypt registered Jan 29 14:35:47.132150 kernel: Key type fscrypt-provisioning registered Jan 29 14:35:47.132164 kernel: ima: No TPM chip found, activating TPM-bypass! 
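Annotation: the rtc_cmos entry above logs the same instant twice, as an ISO timestamp and as the Unix epoch value 1738161346. A quick stdlib sketch confirming the two forms agree:

```python
from datetime import datetime, timezone

# rtc_cmos 00:03 reported 'setting system clock to 2025-01-29T14:35:46 UTC
# (1738161346)'; converting the epoch value back recovers the ISO form.
t = datetime.fromtimestamp(1738161346, tz=timezone.utc)
print(t.isoformat())  # 2025-01-29T14:35:46+00:00
```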
Jan 29 14:35:47.132178 kernel: ima: Allocated hash algorithm: sha1 Jan 29 14:35:47.132197 kernel: ima: No architecture policies found Jan 29 14:35:47.132210 kernel: clk: Disabling unused clocks Jan 29 14:35:47.132224 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 14:35:47.132238 kernel: Write protecting the kernel read-only data: 36864k Jan 29 14:35:47.132252 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 14:35:47.132266 kernel: Run /init as init process Jan 29 14:35:47.132280 kernel: with arguments: Jan 29 14:35:47.132293 kernel: /init Jan 29 14:35:47.132307 kernel: with environment: Jan 29 14:35:47.132325 kernel: HOME=/ Jan 29 14:35:47.132339 kernel: TERM=linux Jan 29 14:35:47.132352 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 14:35:47.132369 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 14:35:47.132386 systemd[1]: Detected virtualization kvm. Jan 29 14:35:47.132400 systemd[1]: Detected architecture x86-64. Jan 29 14:35:47.132414 systemd[1]: Running in initrd. Jan 29 14:35:47.132435 systemd[1]: No hostname configured, using default hostname. Jan 29 14:35:47.132449 systemd[1]: Hostname set to . Jan 29 14:35:47.132464 systemd[1]: Initializing machine ID from VM UUID. Jan 29 14:35:47.132478 systemd[1]: Queued start job for default target initrd.target. Jan 29 14:35:47.132492 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 14:35:47.132507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 14:35:47.132535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 14:35:47.132550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 14:35:47.132570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 14:35:47.132585 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 14:35:47.132602 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 14:35:47.132617 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 14:35:47.132632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 14:35:47.132647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 14:35:47.132661 systemd[1]: Reached target paths.target - Path Units. Jan 29 14:35:47.132681 systemd[1]: Reached target slices.target - Slice Units. Jan 29 14:35:47.132696 systemd[1]: Reached target swap.target - Swaps. Jan 29 14:35:47.132710 systemd[1]: Reached target timers.target - Timer Units. Jan 29 14:35:47.132725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 14:35:47.132740 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 14:35:47.132755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 14:35:47.132769 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 14:35:47.132783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 14:35:47.132798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 14:35:47.133434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 14:35:47.133451 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 14:35:47.133465 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 14:35:47.133480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 14:35:47.133495 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 14:35:47.133510 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 14:35:47.133538 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 14:35:47.133553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 14:35:47.133574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 14:35:47.133651 systemd-journald[201]: Collecting audit messages is disabled. Jan 29 14:35:47.133687 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 14:35:47.133702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 14:35:47.133724 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 14:35:47.133740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 14:35:47.133755 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 14:35:47.133769 kernel: Bridge firewalling registered Jan 29 14:35:47.133786 systemd-journald[201]: Journal started Jan 29 14:35:47.133844 systemd-journald[201]: Runtime Journal (/run/log/journal/30cb4c5658a74992a913221bb6e2166c) is 4.7M, max 38.0M, 33.2M free. Jan 29 14:35:47.063941 systemd-modules-load[202]: Inserted module 'overlay' Jan 29 14:35:47.134343 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 29 14:35:47.163832 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 14:35:47.165227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 14:35:47.166272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 14:35:47.176063 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 14:35:47.183116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 14:35:47.187073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 14:35:47.190889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 14:35:47.203034 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 14:35:47.207370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 14:35:47.217040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 14:35:47.221797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 14:35:47.229043 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 14:35:47.234982 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 14:35:47.237170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 14:35:47.246236 dracut-cmdline[233]: dracut-dracut-053 Jan 29 14:35:47.249080 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 14:35:47.288000 systemd-resolved[235]: Positive Trust Anchors: Jan 29 14:35:47.289189 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 14:35:47.289240 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 14:35:47.298167 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 29 14:35:47.301188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 14:35:47.303056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 14:35:47.353869 kernel: SCSI subsystem initialized Jan 29 14:35:47.364856 kernel: Loading iSCSI transport class v2.0-870. Jan 29 14:35:47.378852 kernel: iscsi: registered transport (tcp) Jan 29 14:35:47.406122 kernel: iscsi: registered transport (qla4xxx) Jan 29 14:35:47.406204 kernel: QLogic iSCSI HBA Driver Jan 29 14:35:47.463175 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 14:35:47.472095 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 14:35:47.508661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
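Annotation: the negative trust anchor list above names the locally-served zones (home.arpa, the RFC 1918 reverse zones, and so on) where systemd-resolved skips DNSSEC validation. A toy suffix matcher over a few of those zones; this is a simplified illustration, not resolved's actual lookup logic:

```python
# A handful of the negative trust anchors from the log above; the real
# systemd-resolved list is much longer.
NEGATIVE = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "local"}

def under_negative_anchor(name: str) -> bool:
    """True if any suffix of `name` is a listed negative trust anchor."""
    labels = name.rstrip(".").split(".")
    return any(".".join(labels[i:]) in NEGATIVE for i in range(len(labels)))

print(under_negative_anchor("1.0.168.192.in-addr.arpa"))  # True
print(under_negative_anchor("example.com"))               # False
```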
Jan 29 14:35:47.508738 kernel: device-mapper: uevent: version 1.0.3 Jan 29 14:35:47.511830 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 14:35:47.559841 kernel: raid6: sse2x4 gen() 13303 MB/s Jan 29 14:35:47.577846 kernel: raid6: sse2x2 gen() 9651 MB/s Jan 29 14:35:47.596476 kernel: raid6: sse2x1 gen() 10103 MB/s Jan 29 14:35:47.596555 kernel: raid6: using algorithm sse2x4 gen() 13303 MB/s Jan 29 14:35:47.615522 kernel: raid6: .... xor() 7637 MB/s, rmw enabled Jan 29 14:35:47.615642 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 14:35:47.641851 kernel: xor: automatically using best checksumming function avx Jan 29 14:35:47.841911 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 14:35:47.857607 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 14:35:47.864021 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 14:35:47.891334 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jan 29 14:35:47.898415 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 14:35:47.908064 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 14:35:47.928346 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 29 14:35:47.970005 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 14:35:47.978060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 14:35:48.091347 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 14:35:48.100380 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 14:35:48.131653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 14:35:48.136122 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
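Annotation: the raid6 lines above are a boot-time micro-benchmark; the kernel times each SIMD gen() routine and keeps the fastest. The same selection, replayed with the throughput figures from this boot:

```python
# gen() throughputs reported in the log above, in MB/s.
results = {"sse2x4": 13303, "sse2x2": 9651, "sse2x1": 10103}
best = max(results, key=results.get)
print(best)  # sse2x4, matching "raid6: using algorithm sse2x4 gen() 13303 MB/s"
```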
Jan 29 14:35:48.137307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 14:35:48.139748 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 14:35:48.148018 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 14:35:48.180127 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 14:35:48.238841 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 14:35:48.253030 kernel: ACPI: bus type USB registered Jan 29 14:35:48.253096 kernel: usbcore: registered new interface driver usbfs Jan 29 14:35:48.256000 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 29 14:35:48.339453 kernel: usbcore: registered new interface driver hub Jan 29 14:35:48.339507 kernel: usbcore: registered new device driver usb Jan 29 14:35:48.339530 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 14:35:48.339740 kernel: AVX version of gcm_enc/dec engaged. Jan 29 14:35:48.339774 kernel: AES CTR mode by8 optimization enabled Jan 29 14:35:48.339791 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 14:35:48.340055 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 29 14:35:48.340260 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 14:35:48.340463 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 29 14:35:48.340691 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 29 14:35:48.340991 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 29 14:35:48.341392 kernel: hub 1-0:1.0: USB hub found Jan 29 14:35:48.341651 kernel: hub 1-0:1.0: 4 ports detected Jan 29 14:35:48.341884 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 14:35:48.342100 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 29 14:35:48.342142 kernel: GPT:17805311 != 125829119 Jan 29 14:35:48.342161 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 14:35:48.342179 kernel: GPT:17805311 != 125829119 Jan 29 14:35:48.342196 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 14:35:48.342213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 14:35:48.342231 kernel: hub 2-0:1.0: USB hub found Jan 29 14:35:48.342447 kernel: hub 2-0:1.0: 4 ports detected Jan 29 14:35:48.295457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 14:35:48.295717 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 14:35:48.296766 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 14:35:48.297598 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 14:35:48.297782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 14:35:48.298682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 14:35:48.336511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 14:35:48.361839 kernel: libata version 3.00 loaded. 
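Annotation: the "GPT:17805311 != 125829119" complaint above is the classic image-resize signature: the backup GPT header still sits where the original, smaller image ended rather than in the disk's last sector. The arithmetic behind the two numbers, with the disk size taken from the virtio_blk line earlier in the log:

```python
# virtio_blk reported "[vda] 125829120 512-byte logical blocks"; GPT keeps
# its backup header in the last LBA of the disk.
disk_sectors = 125829120
expected_alt_lba = disk_sectors - 1   # where the backup header should be
recorded_alt_lba = 17805311           # where the pre-resize image left it
print(expected_alt_lba)  # 125829119, the right-hand number in the warning
```

Tools such as `sgdisk -e` relocate the backup structures to the actual end of the disk; on Flatcar the first-boot machinery handles this for the root partition automatically.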
Jan 29 14:35:48.392861 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (474) Jan 29 14:35:48.409358 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 14:35:48.426402 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 14:35:48.426434 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 14:35:48.426686 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 14:35:48.426990 kernel: scsi host0: ahci Jan 29 14:35:48.427224 kernel: scsi host1: ahci Jan 29 14:35:48.427438 kernel: scsi host2: ahci Jan 29 14:35:48.427660 kernel: scsi host3: ahci Jan 29 14:35:48.429618 kernel: scsi host4: ahci Jan 29 14:35:48.429858 kernel: scsi host5: ahci Jan 29 14:35:48.430053 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 29 14:35:48.430096 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 29 14:35:48.430131 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 29 14:35:48.430157 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 29 14:35:48.430176 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 29 14:35:48.430195 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 29 14:35:48.430213 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (479) Jan 29 14:35:48.422092 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 14:35:48.511783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 14:35:48.518985 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 14:35:48.519861 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 29 14:35:48.528013 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 14:35:48.540013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 14:35:48.546015 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 14:35:48.548990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 14:35:48.565404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 14:35:48.565459 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 14:35:48.568389 disk-uuid[564]: Primary Header is updated. Jan 29 14:35:48.568389 disk-uuid[564]: Secondary Entries is updated. Jan 29 14:35:48.568389 disk-uuid[564]: Secondary Header is updated. Jan 29 14:35:48.589032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 14:35:48.722841 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 14:35:48.737134 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 14:35:48.737225 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 14:35:48.739722 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 14:35:48.744781 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 14:35:48.744899 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 14:35:48.744933 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 14:35:48.758897 kernel: usbcore: registered new interface driver usbhid Jan 29 14:35:48.758952 kernel: usbhid: USB HID core driver Jan 29 14:35:48.766939 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 29 14:35:48.767006 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 29 14:35:49.589852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 
Jan 29 14:35:49.591179 disk-uuid[569]: The operation has completed successfully. Jan 29 14:35:49.647662 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 14:35:49.647855 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 14:35:49.661060 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 14:35:49.666828 sh[586]: Success Jan 29 14:35:49.683841 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 29 14:35:49.749770 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 14:35:49.765956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 14:35:49.767884 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 14:35:49.801558 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 14:35:49.801667 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 14:35:49.805330 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 14:35:49.805370 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 14:35:49.807016 kernel: BTRFS info (device dm-0): using free space tree Jan 29 14:35:49.819064 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 14:35:49.820711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 14:35:49.828132 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 14:35:49.832035 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
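Annotation: verity-setup.service above builds /dev/mapper/usr so that every read from the usr partition is checked against a sha256 hash tree whose root must equal the `verity.usrhash` kernel argument. A toy leaf computation only; the 4 KiB block size and the absence of a salt are simplifying assumptions, not the real veritysetup on-disk format:

```python
import hashlib

# dm-verity hashes fixed-size data blocks into tree leaves; a mismatch
# anywhere makes the device return an I/O error instead of bad data.
block = bytes(4096)                       # one all-zero data block
leaf = hashlib.sha256(block).hexdigest()  # its leaf hash
print(len(leaf))  # 64 hex characters per sha256 leaf
```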
Jan 29 14:35:49.847297 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 14:35:49.847372 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 14:35:49.847395 kernel: BTRFS info (device vda6): using free space tree Jan 29 14:35:49.854871 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 14:35:49.870548 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 14:35:49.870239 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 14:35:49.878658 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 14:35:49.886004 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 14:35:49.984875 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 14:35:49.995092 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 14:35:50.041500 ignition[685]: Ignition 2.19.0 Jan 29 14:35:50.041523 ignition[685]: Stage: fetch-offline Jan 29 14:35:50.043609 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 14:35:50.041645 ignition[685]: no configs at "/usr/lib/ignition/base.d" Jan 29 14:35:50.041671 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 14:35:50.041867 ignition[685]: parsed url from cmdline: "" Jan 29 14:35:50.041873 ignition[685]: no config URL provided Jan 29 14:35:50.041883 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 14:35:50.041906 ignition[685]: no config at "/usr/lib/ignition/user.ign" Jan 29 14:35:50.051338 systemd-networkd[770]: lo: Link UP Jan 29 14:35:50.041915 ignition[685]: failed to fetch config: resource requires networking Jan 29 14:35:50.051344 systemd-networkd[770]: lo: Gained carrier Jan 29 14:35:50.042236 ignition[685]: Ignition finished successfully Jan 29 14:35:50.053621 systemd-networkd[770]: Enumeration completed Jan 29 14:35:50.053827 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 14:35:50.054196 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 14:35:50.054202 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 14:35:50.055679 systemd-networkd[770]: eth0: Link UP Jan 29 14:35:50.055685 systemd-networkd[770]: eth0: Gained carrier Jan 29 14:35:50.055696 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 14:35:50.058279 systemd[1]: Reached target network.target - Network. Jan 29 14:35:50.067012 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 14:35:50.085916 ignition[777]: Ignition 2.19.0
Jan 29 14:35:50.085953 ignition[777]: Stage: fetch
Jan 29 14:35:50.086229 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:35:50.086249 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:35:50.086392 ignition[777]: parsed url from cmdline: ""
Jan 29 14:35:50.086399 ignition[777]: no config URL provided
Jan 29 14:35:50.086409 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 14:35:50.086424 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 29 14:35:50.086640 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 14:35:50.086677 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 14:35:50.086688 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 14:35:50.087031 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 14:35:50.114959 systemd-networkd[770]: eth0: DHCPv4 address 10.244.17.238/30, gateway 10.244.17.237 acquired from 10.244.17.237
Jan 29 14:35:50.287299 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 29 14:35:50.308093 ignition[777]: GET result: OK
Jan 29 14:35:50.308361 ignition[777]: parsing config with SHA512: 4a39b22d3f362221684f433ef4c5be8e8514907c42ca3e17e90eb940f6c01a736c085dee059cc9edf632d38c136b7fee82294c15e9d937afd63e8ec99e83b8df
Jan 29 14:35:50.314784 unknown[777]: fetched base config from "system"
Jan 29 14:35:50.315158 unknown[777]: fetched base config from "system"
Jan 29 14:35:50.315790 ignition[777]: fetch: fetch complete
Jan 29 14:35:50.315176 unknown[777]: fetched user config from "openstack"
Jan 29 14:35:50.315799 ignition[777]: fetch: fetch passed
Jan 29 14:35:50.317792 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 14:35:50.315903 ignition[777]: Ignition finished successfully
Jan 29 14:35:50.331080 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 14:35:50.356114 ignition[784]: Ignition 2.19.0
Jan 29 14:35:50.356139 ignition[784]: Stage: kargs
Jan 29 14:35:50.356442 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:35:50.356477 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:35:50.357665 ignition[784]: kargs: kargs passed
Jan 29 14:35:50.361341 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 14:35:50.357755 ignition[784]: Ignition finished successfully
Jan 29 14:35:50.369315 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 14:35:50.389151 ignition[791]: Ignition 2.19.0
Jan 29 14:35:50.389173 ignition[791]: Stage: disks
Jan 29 14:35:50.389454 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:35:50.389490 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:35:50.393940 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 14:35:50.392111 ignition[791]: disks: disks passed
Jan 29 14:35:50.392191 ignition[791]: Ignition finished successfully
Jan 29 14:35:50.396374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 14:35:50.397874 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 14:35:50.399321 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 14:35:50.400969 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 14:35:50.402603 systemd[1]: Reached target basic.target - Basic System.
Jan 29 14:35:50.420086 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 14:35:50.439956 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 14:35:50.443454 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 14:35:50.451938 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 14:35:50.565042 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 14:35:50.566242 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 14:35:50.567776 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 14:35:50.574915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 14:35:50.576922 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 14:35:50.579148 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 14:35:50.581936 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 14:35:50.583853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 14:35:50.583931 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 14:35:50.597628 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 14:35:50.600114 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807)
Jan 29 14:35:50.601435 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:35:50.601481 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:35:50.601519 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:35:50.614023 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 14:35:50.621832 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:35:50.625488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 14:35:50.694374 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 14:35:50.702488 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 29 14:35:50.709651 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 14:35:50.718142 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 14:35:50.832209 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 14:35:50.836932 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 14:35:50.840003 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 14:35:50.854463 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 14:35:50.857695 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:35:50.880253 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 14:35:50.890215 ignition[924]: INFO : Ignition 2.19.0
Jan 29 14:35:50.890215 ignition[924]: INFO : Stage: mount
Jan 29 14:35:50.893935 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:35:50.893935 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:35:50.893935 ignition[924]: INFO : mount: mount passed
Jan 29 14:35:50.893935 ignition[924]: INFO : Ignition finished successfully
Jan 29 14:35:50.892702 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 14:35:51.185656 systemd-networkd[770]: eth0: Gained IPv6LL
Jan 29 14:35:52.694887 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:47b:24:19ff:fef4:11ee/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:47b:24:19ff:fef4:11ee/64 assigned by NDisc.
Jan 29 14:35:52.694906 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 14:35:57.753991 coreos-metadata[809]: Jan 29 14:35:57.753 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 14:35:57.778747 coreos-metadata[809]: Jan 29 14:35:57.778 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 14:35:57.792847 coreos-metadata[809]: Jan 29 14:35:57.792 INFO Fetch successful
Jan 29 14:35:57.793760 coreos-metadata[809]: Jan 29 14:35:57.793 INFO wrote hostname srv-rni4s.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 14:35:57.796155 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 14:35:57.796364 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 14:35:57.818378 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 14:35:57.838076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 14:35:57.851864 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940)
Jan 29 14:35:57.857334 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:35:57.857382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:35:57.859246 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:35:57.864827 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:35:57.868626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 14:35:57.906859 ignition[958]: INFO : Ignition 2.19.0
Jan 29 14:35:57.906859 ignition[958]: INFO : Stage: files
Jan 29 14:35:57.908892 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:35:57.908892 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:35:57.908892 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 14:35:57.911819 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 14:35:57.911819 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 14:35:57.914840 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 14:35:57.915946 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 14:35:57.915946 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 14:35:57.915571 unknown[958]: wrote ssh authorized keys file for user: core
Jan 29 14:35:57.919769 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 14:35:57.919769 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 14:35:58.109325 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 14:35:58.718943 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 14:35:58.720723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 14:35:58.735841 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 14:35:58.735841 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 14:35:58.735841 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:35:58.735841 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:35:58.735841 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:35:58.735841 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 14:35:59.315487 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 14:36:00.426514 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 14:36:00.426514 ignition[958]: INFO : files: files passed
Jan 29 14:36:00.426514 ignition[958]: INFO : Ignition finished successfully
Jan 29 14:36:00.431956 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 14:36:00.446178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 14:36:00.451123 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 14:36:00.452555 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 14:36:00.452736 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 14:36:00.484182 initrd-setup-root-after-ignition[986]: grep:
Jan 29 14:36:00.484182 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:36:00.486573 initrd-setup-root-after-ignition[986]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:36:00.486573 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:36:00.489403 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 14:36:00.491067 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 14:36:00.499147 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 14:36:00.542948 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 14:36:00.543158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 14:36:00.545642 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 14:36:00.546644 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 14:36:00.547614 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 14:36:00.554110 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 14:36:00.575076 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 14:36:00.583155 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 14:36:00.608768 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:36:00.609964 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:36:00.611721 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 14:36:00.613253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 14:36:00.613577 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 14:36:00.614983 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 14:36:00.615939 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 14:36:00.617503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 14:36:00.619339 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 14:36:00.621113 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 14:36:00.622053 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 14:36:00.623027 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 14:36:00.624150 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 14:36:00.625693 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 14:36:00.627281 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 14:36:00.628705 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 14:36:00.629018 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 14:36:00.631007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:36:00.632081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:36:00.633794 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 14:36:00.634270 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:36:00.635421 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 14:36:00.635724 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 14:36:00.637499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 14:36:00.637758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 14:36:00.639544 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 14:36:00.639798 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 14:36:00.649343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 14:36:00.650776 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 14:36:00.651092 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 14:36:00.661183 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 14:36:00.663100 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 14:36:00.663330 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:36:00.667251 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 14:36:00.667646 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 14:36:00.684116 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 14:36:00.686020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 14:36:00.690035 ignition[1011]: INFO : Ignition 2.19.0
Jan 29 14:36:00.692077 ignition[1011]: INFO : Stage: umount
Jan 29 14:36:00.693025 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:36:00.693025 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:36:00.695590 ignition[1011]: INFO : umount: umount passed
Jan 29 14:36:00.695590 ignition[1011]: INFO : Ignition finished successfully
Jan 29 14:36:00.696469 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 14:36:00.696657 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 14:36:00.702305 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 14:36:00.702421 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 14:36:00.703961 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 14:36:00.704053 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 14:36:00.704766 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 14:36:00.704856 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 14:36:00.705567 systemd[1]: Stopped target network.target - Network.
Jan 29 14:36:00.706191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 14:36:00.706261 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 14:36:00.707792 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 14:36:00.710221 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 14:36:00.712180 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:36:00.713487 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 14:36:00.718344 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 14:36:00.719073 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 14:36:00.719152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 14:36:00.719871 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 14:36:00.719934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 14:36:00.721923 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 14:36:00.722014 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 14:36:00.723347 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 14:36:00.723420 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 14:36:00.725184 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 14:36:00.727485 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 14:36:00.730614 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 14:36:00.732925 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jan 29 14:36:00.736902 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 14:36:00.737858 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 14:36:00.739867 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 14:36:00.740031 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 14:36:00.743559 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 14:36:00.743846 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:36:00.762605 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 14:36:00.763396 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 14:36:00.763493 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 14:36:00.764629 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 14:36:00.764697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:36:00.765553 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 14:36:00.765620 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 14:36:00.767441 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 14:36:00.767517 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:36:00.769263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:36:00.783345 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 14:36:00.783559 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 14:36:00.785376 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 14:36:00.785581 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:36:00.787285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 14:36:00.787435 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 14:36:00.788998 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 14:36:00.789060 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 14:36:00.790421 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 14:36:00.790492 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 14:36:00.792686 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 14:36:00.792759 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 14:36:00.794223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 14:36:00.794305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 14:36:00.805112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 14:36:00.808198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 14:36:00.808312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:36:00.812114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 14:36:00.812189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:36:00.815104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 14:36:00.815269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 14:36:00.855773 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 14:36:00.856024 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 14:36:00.858280 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 14:36:00.859104 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 14:36:00.859196 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 14:36:00.867033 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 14:36:00.885908 systemd[1]: Switching root.
Jan 29 14:36:00.923132 systemd-journald[201]: Journal stopped
Jan 29 14:36:02.408458 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 29 14:36:02.408564 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 14:36:02.408598 kernel: SELinux: policy capability open_perms=1
Jan 29 14:36:02.408618 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 14:36:02.408643 kernel: SELinux: policy capability always_check_network=0
Jan 29 14:36:02.408662 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 14:36:02.408702 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 14:36:02.408729 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 14:36:02.408750 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 14:36:02.408770 kernel: audit: type=1403 audit(1738161361.159:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 14:36:02.408792 systemd[1]: Successfully loaded SELinux policy in 54.313ms.
Jan 29 14:36:02.408928 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.762ms.
Jan 29 14:36:02.408956 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 14:36:02.408977 systemd[1]: Detected virtualization kvm.
Jan 29 14:36:02.409012 systemd[1]: Detected architecture x86-64.
Jan 29 14:36:02.409035 systemd[1]: Detected first boot.
Jan 29 14:36:02.409055 systemd[1]: Hostname set to .
Jan 29 14:36:02.409077 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 14:36:02.409097 zram_generator::config[1054]: No configuration found.
Jan 29 14:36:02.409122 systemd[1]: Populated /etc with preset unit settings.
Jan 29 14:36:02.409142 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 14:36:02.409163 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 14:36:02.409198 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 14:36:02.409221 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 14:36:02.409242 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 14:36:02.409261 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 14:36:02.409296 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 14:36:02.409318 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 14:36:02.409338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 14:36:02.409359 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 14:36:02.409380 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 14:36:02.409414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:36:02.409437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:36:02.409459 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 14:36:02.409479 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 14:36:02.409500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 14:36:02.409520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 14:36:02.409541 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 14:36:02.409569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:36:02.409597 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 14:36:02.409632 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 14:36:02.409654 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 14:36:02.409677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 14:36:02.409698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:36:02.409719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 14:36:02.409740 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 14:36:02.409773 systemd[1]: Reached target swap.target - Swaps.
Jan 29 14:36:02.409796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 14:36:02.409837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 14:36:02.409859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:36:02.409893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 14:36:02.409935 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 14:36:02.409968 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 14:36:02.409990 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 14:36:02.410011 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 14:36:02.410031 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 14:36:02.410052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 14:36:02.410072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 14:36:02.410092 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 14:36:02.410113 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 14:36:02.410134 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 14:36:02.410167 systemd[1]: Reached target machines.target - Containers. Jan 29 14:36:02.410191 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 14:36:02.410212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 14:36:02.410234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 14:36:02.410255 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 14:36:02.410289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 14:36:02.410312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 29 14:36:02.410332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 14:36:02.410366 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 14:36:02.410389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 14:36:02.410417 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 14:36:02.410440 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 14:36:02.410461 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 14:36:02.410481 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 14:36:02.410501 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 14:36:02.410521 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 14:36:02.410542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 14:36:02.410576 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 14:36:02.410599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 14:36:02.410620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 14:36:02.410641 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 14:36:02.410661 systemd[1]: Stopped verity-setup.service. Jan 29 14:36:02.410682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 14:36:02.410703 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 14:36:02.410724 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 14:36:02.410757 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 29 14:36:02.410781 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 14:36:02.410896 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 14:36:02.410928 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 14:36:02.410950 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 14:36:02.411023 systemd-journald[1154]: Collecting audit messages is disabled. Jan 29 14:36:02.411060 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 14:36:02.411082 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 14:36:02.411104 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 14:36:02.411125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 14:36:02.411145 kernel: ACPI: bus type drm_connector registered Jan 29 14:36:02.411165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 14:36:02.411187 systemd-journald[1154]: Journal started Jan 29 14:36:02.411236 systemd-journald[1154]: Runtime Journal (/run/log/journal/30cb4c5658a74992a913221bb6e2166c) is 4.7M, max 38.0M, 33.2M free. Jan 29 14:36:01.966588 systemd[1]: Queued start job for default target multi-user.target. Jan 29 14:36:01.988772 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 14:36:01.989546 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 14:36:02.413842 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 14:36:02.416882 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 14:36:02.422239 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 14:36:02.422347 kernel: fuse: init (API version 7.39) Jan 29 14:36:02.427368 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 14:36:02.427762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 14:36:02.429091 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 14:36:02.429342 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 14:36:02.430644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 14:36:02.431788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 14:36:02.433253 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 14:36:02.453449 kernel: loop: module loaded Jan 29 14:36:02.455190 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 14:36:02.455518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 14:36:02.460485 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 14:36:02.468909 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 14:36:02.485922 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 14:36:02.486874 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 14:36:02.486939 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 14:36:02.491422 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 14:36:02.500046 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 14:36:02.509907 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 14:36:02.512068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 14:36:02.522948 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 29 14:36:02.527016 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 14:36:02.528638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 14:36:02.544131 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 14:36:02.545069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 14:36:02.548757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 14:36:02.555973 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 14:36:02.566179 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 14:36:02.570713 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 14:36:02.572848 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 14:36:02.574092 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 14:36:02.600847 kernel: loop0: detected capacity change from 0 to 8 Jan 29 14:36:02.611478 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 14:36:02.616123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 14:36:02.616438 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 14:36:02.624273 systemd-journald[1154]: Time spent on flushing to /var/log/journal/30cb4c5658a74992a913221bb6e2166c is 48.476ms for 1144 entries. Jan 29 14:36:02.624273 systemd-journald[1154]: System Journal (/var/log/journal/30cb4c5658a74992a913221bb6e2166c) is 8.0M, max 584.8M, 576.8M free. Jan 29 14:36:02.719010 systemd-journald[1154]: Received client request to flush runtime journal. 
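[Editor's note: the journald flush accounting above (48.476 ms for 1144 entries) implies a per-entry flush cost in the tens of microseconds. A quick sketch of that arithmetic, using only the figures taken from the log lines above:]

```python
# Per-entry flush cost implied by the journald statistics logged above.
flush_ms = 48.476   # "Time spent on flushing ... is 48.476ms"
entries = 1144      # "... for 1144 entries"

per_entry_us = flush_ms / entries * 1000  # milliseconds -> microseconds
print(f"~{per_entry_us:.1f} us per journal entry")
```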
Jan 29 14:36:02.719121 kernel: loop1: detected capacity change from 0 to 140768
Jan 29 14:36:02.630513 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 14:36:02.690028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:36:02.726099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 14:36:02.729711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 14:36:02.733052 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 14:36:02.734755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:36:02.746198 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 14:36:02.753077 kernel: loop2: detected capacity change from 0 to 210664
Jan 29 14:36:02.778520 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 14:36:02.779010 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 14:36:02.792129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 14:36:02.808836 kernel: loop3: detected capacity change from 0 to 142488
Jan 29 14:36:02.862851 kernel: loop4: detected capacity change from 0 to 8
Jan 29 14:36:02.864889 kernel: loop5: detected capacity change from 0 to 140768
Jan 29 14:36:02.890843 kernel: loop6: detected capacity change from 0 to 210664
Jan 29 14:36:02.910443 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 14:36:02.910474 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 14:36:02.926460 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:36:02.931928 kernel: loop7: detected capacity change from 0 to 142488
Jan 29 14:36:02.969370 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 14:36:02.971464 (sd-merge)[1212]: Merged extensions into '/usr'.
Jan 29 14:36:02.982744 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 14:36:02.982772 systemd[1]: Reloading...
Jan 29 14:36:03.112897 zram_generator::config[1236]: No configuration found.
Jan 29 14:36:03.386260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:36:03.465974 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 14:36:03.485037 systemd[1]: Reloading finished in 501 ms.
Jan 29 14:36:03.541557 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 14:36:03.543175 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 14:36:03.559144 systemd[1]: Starting ensure-sysext.service...
Jan 29 14:36:03.562021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 14:36:03.595579 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)...
Jan 29 14:36:03.595606 systemd[1]: Reloading...
Jan 29 14:36:03.635511 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 14:36:03.636131 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 14:36:03.637936 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 14:36:03.638374 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Jan 29 14:36:03.638504 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Jan 29 14:36:03.650678 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 14:36:03.650699 systemd-tmpfiles[1296]: Skipping /boot
Jan 29 14:36:03.680166 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 14:36:03.680189 systemd-tmpfiles[1296]: Skipping /boot
Jan 29 14:36:03.743856 zram_generator::config[1323]: No configuration found.
Jan 29 14:36:03.928229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:36:04.000110 systemd[1]: Reloading finished in 403 ms.
Jan 29 14:36:04.025968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 14:36:04.031524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:36:04.050653 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 14:36:04.056040 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 14:36:04.068075 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 14:36:04.074614 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 14:36:04.080831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:36:04.085037 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 14:36:04.095489 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:36:04.095785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:36:04.109052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 14:36:04.118140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 14:36:04.122106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 14:36:04.123589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:36:04.123749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:36:04.139334 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 14:36:04.141618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 14:36:04.149543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:36:04.152090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:36:04.152393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:36:04.162925 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 14:36:04.163724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:36:04.165750 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 14:36:04.177397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 14:36:04.178124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 14:36:04.179726 systemd-udevd[1392]: Using default interface naming scheme 'v255'.
Jan 29 14:36:04.180708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 14:36:04.181962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 14:36:04.192197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:36:04.192622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:36:04.199576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 14:36:04.200687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:36:04.200903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 14:36:04.200979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:36:04.202219 systemd[1]: Finished ensure-sysext.service.
Jan 29 14:36:04.220086 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 14:36:04.234027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:36:04.248005 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 14:36:04.256635 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 14:36:04.258277 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 14:36:04.272546 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 14:36:04.273874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 14:36:04.281701 augenrules[1429]: No rules
Jan 29 14:36:04.283784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 14:36:04.291453 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 14:36:04.296917 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 14:36:04.297948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 14:36:04.307913 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 14:36:04.361730 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 14:36:04.376498 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 14:36:04.531638 systemd-networkd[1417]: lo: Link UP
Jan 29 14:36:04.531654 systemd-networkd[1417]: lo: Gained carrier
Jan 29 14:36:04.536931 systemd-networkd[1417]: Enumeration completed
Jan 29 14:36:04.537102 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 14:36:04.541445 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:36:04.541459 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 14:36:04.544907 systemd-resolved[1391]: Positive Trust Anchors:
Jan 29 14:36:04.546079 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 14:36:04.546128 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 14:36:04.548300 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 14:36:04.551368 systemd-networkd[1417]: eth0: Link UP
Jan 29 14:36:04.551382 systemd-networkd[1417]: eth0: Gained carrier
Jan 29 14:36:04.551410 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:36:04.559830 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1428)
Jan 29 14:36:04.560911 systemd-resolved[1391]: Using system hostname 'srv-rni4s.gb1.brightbox.com'.
Jan 29 14:36:04.563637 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 14:36:04.566539 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 14:36:04.567397 systemd[1]: Reached target network.target - Network.
Jan 29 14:36:04.568252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:36:04.569989 systemd[1]: Reached target time-set.target - System Time Set.
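[Editor's note: the positive trust anchor that systemd-resolved logs above is the root zone's DNSSEC DS record. Per RFC 4034 its RDATA fields are key tag, algorithm, digest type, and digest; a hedged sketch pulling those fields out of the record text copied from the log:]

```python
# Split the root DS record logged by systemd-resolved into its RFC 4034
# fields: key tag, algorithm, digest type, digest.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, klass, rrtype, key_tag, algorithm, digest_type, digest = record.split()
key_tag, algorithm, digest_type = int(key_tag), int(algorithm), int(digest_type)

# Key tag 20326 is the root KSK-2017; algorithm 8 is RSA/SHA-256;
# digest type 2 is SHA-256, so the hex digest is 32 bytes long.
print(key_tag, algorithm, digest_type, len(digest) // 2)
```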
Jan 29 14:36:04.585941 systemd-networkd[1417]: eth0: DHCPv4 address 10.244.17.238/30, gateway 10.244.17.237 acquired from 10.244.17.237
Jan 29 14:36:04.587445 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Jan 29 14:36:04.597151 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:36:04.624962 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 14:36:04.639835 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 14:36:04.653840 kernel: ACPI: button: Power Button [PWRF]
Jan 29 14:36:04.699956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 14:36:04.711111 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 14:36:04.730859 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 14:36:04.738716 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 14:36:04.739021 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 14:36:04.745439 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 14:36:04.752832 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 14:36:04.804169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 14:36:04.874544 systemd-timesyncd[1412]: Contacted time server 85.199.214.102:123 (0.flatcar.pool.ntp.org).
Jan 29 14:36:04.874867 systemd-timesyncd[1412]: Initial clock synchronization to Wed 2025-01-29 14:36:05.257258 UTC.
Jan 29 14:36:05.004969 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 14:36:05.041973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
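[Editor's note: the DHCPv4 lease above is a /30, i.e. a four-address block whose only two usable hosts are the gateway (.237) and this machine (.238). That can be verified with Python's stdlib `ipaddress` module, using the address and prefix straight from the log line:]

```python
import ipaddress

# The lease eth0 acquired above: 10.244.17.238/30, gateway 10.244.17.237.
iface = ipaddress.ip_interface("10.244.17.238/30")
net = iface.network                 # the enclosing /30 block
hosts = [str(h) for h in net.hosts()]  # usable addresses (excludes network/broadcast)

print(net, hosts)
```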
Jan 29 14:36:05.051249 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 14:36:05.075890 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 14:36:05.112760 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 14:36:05.114863 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:36:05.115697 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 14:36:05.116682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 14:36:05.117763 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 14:36:05.119022 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 14:36:05.120000 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 14:36:05.120860 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 14:36:05.121686 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 14:36:05.121742 systemd[1]: Reached target paths.target - Path Units.
Jan 29 14:36:05.122444 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 14:36:05.124340 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 14:36:05.127766 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 14:36:05.134496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 14:36:05.137249 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 14:36:05.138869 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 14:36:05.139757 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 14:36:05.140477 systemd[1]: Reached target basic.target - Basic System.
Jan 29 14:36:05.141261 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 14:36:05.141308 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 14:36:05.142997 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 14:36:05.149179 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 14:36:05.153097 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 14:36:05.156397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 14:36:05.156566 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 14:36:05.160091 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 14:36:05.160936 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 14:36:05.165090 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 14:36:05.174023 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 14:36:05.181074 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 14:36:05.189720 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 14:36:05.197061 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 14:36:05.200315 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 14:36:05.201434 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 14:36:05.205118 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 14:36:05.208054 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 14:36:05.233220 jq[1477]: false
Jan 29 14:36:05.236614 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 14:36:05.236928 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 14:36:05.254472 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 14:36:05.255935 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 14:36:05.277928 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 14:36:05.298390 dbus-daemon[1476]: [system] SELinux support is enabled
Jan 29 14:36:05.299381 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found loop4
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found loop5
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found loop6
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found loop7
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda1
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda2
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda3
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found usr
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda4
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda6
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda7
Jan 29 14:36:05.307090 extend-filesystems[1478]: Found vda9
Jan 29 14:36:05.307090 extend-filesystems[1478]: Checking size of /dev/vda9
Jan 29 14:36:05.472066 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 29 14:36:05.472284 update_engine[1484]: I20250129 14:36:05.358081 1484 main.cc:92] Flatcar Update Engine starting
Jan 29 14:36:05.472284 update_engine[1484]: I20250129 14:36:05.365445 1484 update_check_scheduler.cc:74] Next update check in 9m18s
Jan 29 14:36:05.472804 tar[1487]: linux-amd64/helm
Jan 29 14:36:05.484062 extend-filesystems[1478]: Resized partition /dev/vda9
Jan 29 14:36:05.487958 jq[1485]: true
Jan 29 14:36:05.319695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 14:36:05.312643 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1417 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 14:36:05.494489 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024)
Jan 29 14:36:05.320631 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 14:36:05.323844 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 14:36:05.507516 jq[1510]: true
Jan 29 14:36:05.320670 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 14:36:05.323339 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 14:36:05.323383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 14:36:05.358816 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 14:36:05.365298 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 14:36:05.369442 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 14:36:05.525747 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1415)
Jan 29 14:36:05.373075 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 14:36:05.386287 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 14:36:05.386579 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 14:36:05.399885 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 29 14:36:05.399935 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 14:36:05.402824 systemd-logind[1483]: New seat seat0.
Jan 29 14:36:05.444156 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 14:36:05.716608 bash[1535]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 14:36:05.717387 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 14:36:05.736238 systemd[1]: Starting sshkeys.service...
Jan 29 14:36:05.793827 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 29 14:36:05.811991 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 14:36:05.825385 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 14:36:05.825385 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 29 14:36:05.825385 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 29 14:36:05.823415 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 14:36:05.834312 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Jan 29 14:36:05.827089 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 14:36:05.827374 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 14:36:05.910158 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 14:36:05.937331 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 29 14:36:05.937620 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 29 14:36:05.938865 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 29 14:36:05.951874 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 29 14:36:05.960880 containerd[1501]: time="2025-01-29T14:36:05.959934661Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 14:36:05.991259 polkitd[1552]: Started polkitd version 121
Jan 29 14:36:06.000948 polkitd[1552]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 14:36:06.001044 polkitd[1552]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 14:36:06.007365 polkitd[1552]: Finished loading, compiling and executing 2 rules
Jan 29 14:36:06.008059 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 29 14:36:06.008329 systemd[1]: Started polkit.service - Authorization Manager.
Jan 29 14:36:06.010918 polkitd[1552]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 29 14:36:06.022252 containerd[1501]: time="2025-01-29T14:36:06.021947547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
type=io.containerd.snapshotter.v1 Jan 29 14:36:06.024887 containerd[1501]: time="2025-01-29T14:36:06.024610002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:36:06.024887 containerd[1501]: time="2025-01-29T14:36:06.024660765Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 14:36:06.024887 containerd[1501]: time="2025-01-29T14:36:06.024688764Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 14:36:06.025068 containerd[1501]: time="2025-01-29T14:36:06.025022501Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 14:36:06.025068 containerd[1501]: time="2025-01-29T14:36:06.025061038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025224 containerd[1501]: time="2025-01-29T14:36:06.025179034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025224 containerd[1501]: time="2025-01-29T14:36:06.025213496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025516 containerd[1501]: time="2025-01-29T14:36:06.025472949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025588 containerd[1501]: time="2025-01-29T14:36:06.025512622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025588 containerd[1501]: time="2025-01-29T14:36:06.025537004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025588 containerd[1501]: time="2025-01-29T14:36:06.025554905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 14:36:06.025726 containerd[1501]: time="2025-01-29T14:36:06.025697573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:36:06.029305 containerd[1501]: time="2025-01-29T14:36:06.029236734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:36:06.029466 containerd[1501]: time="2025-01-29T14:36:06.029423107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:36:06.029466 containerd[1501]: time="2025-01-29T14:36:06.029461369Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 14:36:06.029639 containerd[1501]: time="2025-01-29T14:36:06.029602987Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 14:36:06.029728 containerd[1501]: time="2025-01-29T14:36:06.029700925Z" level=info msg="metadata content store policy set" policy=shared Jan 29 14:36:06.044185 containerd[1501]: time="2025-01-29T14:36:06.044125705Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 14:36:06.044425 containerd[1501]: time="2025-01-29T14:36:06.044240037Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 14:36:06.044425 containerd[1501]: time="2025-01-29T14:36:06.044271749Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 14:36:06.044425 containerd[1501]: time="2025-01-29T14:36:06.044367314Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 14:36:06.044425 containerd[1501]: time="2025-01-29T14:36:06.044400813Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 14:36:06.045032 containerd[1501]: time="2025-01-29T14:36:06.044636191Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 14:36:06.046436 containerd[1501]: time="2025-01-29T14:36:06.046265031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 14:36:06.046573 systemd-hostnamed[1512]: Hostname set to (static) Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.048990996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049029160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049052871Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049076740Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049100761Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049122402Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049145032Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049175711Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049204053Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049224123Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049252786Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049295278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049322311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.049556 containerd[1501]: time="2025-01-29T14:36:06.049345120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049368185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049388512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049410444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049464601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049491490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049516725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049541850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049562558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049583190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049603044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049635658Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049675021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049699827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050113 containerd[1501]: time="2025-01-29T14:36:06.049721036Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050019078Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050077473Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050100974Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050121980Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050140704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050162359Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050195275Z" level=info msg="NRI interface is disabled by configuration." Jan 29 14:36:06.050587 containerd[1501]: time="2025-01-29T14:36:06.050215645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 14:36:06.050919 containerd[1501]: time="2025-01-29T14:36:06.050606593Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 14:36:06.050919 containerd[1501]: time="2025-01-29T14:36:06.050708190Z" level=info msg="Connect containerd service" Jan 29 14:36:06.050919 containerd[1501]: time="2025-01-29T14:36:06.050783923Z" level=info msg="using legacy CRI server" Jan 29 14:36:06.050919 containerd[1501]: time="2025-01-29T14:36:06.050800575Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 14:36:06.054858 containerd[1501]: time="2025-01-29T14:36:06.053096434Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 14:36:06.056514 containerd[1501]: time="2025-01-29T14:36:06.056414008Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 29 14:36:06.056858 containerd[1501]: time="2025-01-29T14:36:06.056610168Z" level=info msg="Start subscribing containerd event" Jan 29 14:36:06.056858 containerd[1501]: time="2025-01-29T14:36:06.056698424Z" level=info msg="Start recovering state" Jan 29 14:36:06.057893 containerd[1501]: time="2025-01-29T14:36:06.057861195Z" level=info msg="Start event monitor" Jan 29 14:36:06.057952 containerd[1501]: time="2025-01-29T14:36:06.057913846Z" level=info msg="Start snapshots syncer" Jan 29 14:36:06.057952 containerd[1501]: time="2025-01-29T14:36:06.057939194Z" level=info msg="Start cni network conf syncer for default" Jan 29 14:36:06.058035 containerd[1501]: time="2025-01-29T14:36:06.057959529Z" level=info msg="Start streaming server" Jan 29 14:36:06.061733 containerd[1501]: time="2025-01-29T14:36:06.060135021Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 14:36:06.061733 containerd[1501]: time="2025-01-29T14:36:06.060232756Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 14:36:06.061733 containerd[1501]: time="2025-01-29T14:36:06.060365617Z" level=info msg="containerd successfully booted in 0.101819s" Jan 29 14:36:06.060497 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 14:36:06.225756 systemd-networkd[1417]: eth0: Gained IPv6LL Jan 29 14:36:06.232256 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 14:36:06.236809 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 14:36:06.248192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:36:06.259060 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 14:36:06.334971 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 29 14:36:06.463878 tar[1487]: linux-amd64/LICENSE Jan 29 14:36:06.463878 tar[1487]: linux-amd64/README.md Jan 29 14:36:06.492347 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 14:36:06.600917 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 14:36:06.643834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 14:36:06.664935 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 14:36:06.672384 systemd[1]: Started sshd@0-10.244.17.238:22-139.178.68.195:39782.service - OpenSSH per-connection server daemon (139.178.68.195:39782). Jan 29 14:36:06.687372 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 14:36:06.687801 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 14:36:06.702151 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 14:36:06.735427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 14:36:06.746285 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 14:36:06.754504 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 14:36:06.755798 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 14:36:07.318187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 14:36:07.321323 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 14:36:07.624581 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 39782 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:36:07.627874 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:36:07.645810 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 14:36:07.656341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 29 14:36:07.664813 systemd-logind[1483]: New session 1 of user core. Jan 29 14:36:07.686249 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 14:36:07.701346 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 14:36:07.711090 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 14:36:07.863522 systemd[1612]: Queued start job for default target default.target. Jan 29 14:36:07.874024 systemd[1612]: Created slice app.slice - User Application Slice. Jan 29 14:36:07.874073 systemd[1612]: Reached target paths.target - Paths. Jan 29 14:36:07.874098 systemd[1612]: Reached target timers.target - Timers. Jan 29 14:36:07.878047 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 14:36:07.898240 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 14:36:07.898504 systemd[1612]: Reached target sockets.target - Sockets. Jan 29 14:36:07.898544 systemd[1612]: Reached target basic.target - Basic System. Jan 29 14:36:07.898901 systemd[1612]: Reached target default.target - Main User Target. Jan 29 14:36:07.898978 systemd[1612]: Startup finished in 175ms. Jan 29 14:36:07.899023 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 14:36:07.906158 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 14:36:08.074984 kubelet[1604]: E0129 14:36:08.074715 1604 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 14:36:08.077777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 14:36:08.078153 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 14:36:08.079060 systemd[1]: kubelet.service: Consumed 1.060s CPU time. Jan 29 14:36:08.223916 systemd-networkd[1417]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:47b:24:19ff:fef4:11ee/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:47b:24:19ff:fef4:11ee/64 assigned by NDisc. Jan 29 14:36:08.223931 systemd-networkd[1417]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 29 14:36:08.568446 systemd[1]: Started sshd@1-10.244.17.238:22-139.178.68.195:58346.service - OpenSSH per-connection server daemon (139.178.68.195:58346). Jan 29 14:36:09.489547 sshd[1627]: Accepted publickey for core from 139.178.68.195 port 58346 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:36:09.492123 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:36:09.499691 systemd-logind[1483]: New session 2 of user core. Jan 29 14:36:09.517307 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 14:36:10.126276 sshd[1627]: pam_unix(sshd:session): session closed for user core Jan 29 14:36:10.131331 systemd[1]: sshd@1-10.244.17.238:22-139.178.68.195:58346.service: Deactivated successfully. Jan 29 14:36:10.134160 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 14:36:10.135721 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Jan 29 14:36:10.137951 systemd-logind[1483]: Removed session 2. Jan 29 14:36:10.291523 systemd[1]: Started sshd@2-10.244.17.238:22-139.178.68.195:58352.service - OpenSSH per-connection server daemon (139.178.68.195:58352). 
Jan 29 14:36:11.195424 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 58352 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:36:11.198075 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:36:11.205393 systemd-logind[1483]: New session 3 of user core. Jan 29 14:36:11.216314 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 14:36:11.832173 sshd[1635]: pam_unix(sshd:session): session closed for user core Jan 29 14:36:11.833193 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 14:36:11.839731 systemd[1]: sshd@2-10.244.17.238:22-139.178.68.195:58352.service: Deactivated successfully. Jan 29 14:36:11.840777 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 14:36:11.843286 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 14:36:11.844382 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Jan 29 14:36:11.851354 systemd-logind[1483]: New session 4 of user core. Jan 29 14:36:11.860254 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 14:36:11.861968 systemd-logind[1483]: Removed session 3. Jan 29 14:36:11.866924 systemd-logind[1483]: New session 5 of user core. Jan 29 14:36:11.875314 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 29 14:36:12.417492 coreos-metadata[1475]: Jan 29 14:36:12.417 WARN failed to locate config-drive, using the metadata service API instead Jan 29 14:36:12.446711 coreos-metadata[1475]: Jan 29 14:36:12.446 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 14:36:12.452867 coreos-metadata[1475]: Jan 29 14:36:12.452 INFO Fetch failed with 404: resource not found Jan 29 14:36:12.452867 coreos-metadata[1475]: Jan 29 14:36:12.452 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 14:36:12.453536 coreos-metadata[1475]: Jan 29 14:36:12.453 INFO Fetch successful Jan 29 14:36:12.453716 coreos-metadata[1475]: Jan 29 14:36:12.453 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 14:36:12.467114 coreos-metadata[1475]: Jan 29 14:36:12.467 INFO Fetch successful Jan 29 14:36:12.467293 coreos-metadata[1475]: Jan 29 14:36:12.467 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 14:36:12.482254 coreos-metadata[1475]: Jan 29 14:36:12.482 INFO Fetch successful Jan 29 14:36:12.482492 coreos-metadata[1475]: Jan 29 14:36:12.482 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 14:36:12.496856 coreos-metadata[1475]: Jan 29 14:36:12.496 INFO Fetch successful Jan 29 14:36:12.497053 coreos-metadata[1475]: Jan 29 14:36:12.497 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 14:36:12.514558 coreos-metadata[1475]: Jan 29 14:36:12.514 INFO Fetch successful Jan 29 14:36:12.556587 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 14:36:12.558559 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 29 14:36:12.932563 coreos-metadata[1548]: Jan 29 14:36:12.932 WARN failed to locate config-drive, using the metadata service API instead Jan 29 14:36:12.956597 coreos-metadata[1548]: Jan 29 14:36:12.956 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 14:36:12.986755 coreos-metadata[1548]: Jan 29 14:36:12.986 INFO Fetch successful Jan 29 14:36:12.986755 coreos-metadata[1548]: Jan 29 14:36:12.986 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 14:36:13.018890 coreos-metadata[1548]: Jan 29 14:36:13.018 INFO Fetch successful Jan 29 14:36:13.021105 unknown[1548]: wrote ssh authorized keys file for user: core Jan 29 14:36:13.043376 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Jan 29 14:36:13.044260 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 14:36:13.048373 systemd[1]: Finished sshkeys.service. Jan 29 14:36:13.050283 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 14:36:13.053935 systemd[1]: Startup finished in 1.506s (kernel) + 14.400s (initrd) + 11.944s (userspace) = 27.851s. Jan 29 14:36:18.117302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 14:36:18.127149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:36:18.281371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 14:36:18.296536 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 14:36:18.362063 kubelet[1685]: E0129 14:36:18.361977 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 14:36:18.366995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 14:36:18.367275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 14:36:22.105193 systemd[1]: Started sshd@3-10.244.17.238:22-139.178.68.195:48438.service - OpenSSH per-connection server daemon (139.178.68.195:48438). Jan 29 14:36:22.995659 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 48438 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:36:22.997875 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:36:23.005838 systemd-logind[1483]: New session 6 of user core. Jan 29 14:36:23.013032 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 14:36:23.615324 sshd[1695]: pam_unix(sshd:session): session closed for user core Jan 29 14:36:23.620484 systemd[1]: sshd@3-10.244.17.238:22-139.178.68.195:48438.service: Deactivated successfully. Jan 29 14:36:23.622451 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 14:36:23.623330 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jan 29 14:36:23.624935 systemd-logind[1483]: Removed session 6. Jan 29 14:36:23.779546 systemd[1]: Started sshd@4-10.244.17.238:22-139.178.68.195:48448.service - OpenSSH per-connection server daemon (139.178.68.195:48448). 
Jan 29 14:36:24.660364 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 48448 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:36:24.662605 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:36:24.669883 systemd-logind[1483]: New session 7 of user core.
Jan 29 14:36:24.678093 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 14:36:25.282743 sshd[1702]: pam_unix(sshd:session): session closed for user core
Jan 29 14:36:25.288022 systemd[1]: sshd@4-10.244.17.238:22-139.178.68.195:48448.service: Deactivated successfully.
Jan 29 14:36:25.290253 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 14:36:25.291139 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit.
Jan 29 14:36:25.292742 systemd-logind[1483]: Removed session 7.
Jan 29 14:36:25.434904 systemd[1]: Started sshd@5-10.244.17.238:22-139.178.68.195:43334.service - OpenSSH per-connection server daemon (139.178.68.195:43334).
Jan 29 14:36:26.324470 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 43334 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:36:26.326667 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:36:26.335464 systemd-logind[1483]: New session 8 of user core.
Jan 29 14:36:26.337030 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 14:36:26.941308 sshd[1709]: pam_unix(sshd:session): session closed for user core
Jan 29 14:36:26.945337 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit.
Jan 29 14:36:26.945889 systemd[1]: sshd@5-10.244.17.238:22-139.178.68.195:43334.service: Deactivated successfully.
Jan 29 14:36:26.948036 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 14:36:26.950028 systemd-logind[1483]: Removed session 8.
Jan 29 14:36:27.098798 systemd[1]: Started sshd@6-10.244.17.238:22-139.178.68.195:43344.service - OpenSSH per-connection server daemon (139.178.68.195:43344).
Jan 29 14:36:27.998538 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 43344 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:36:28.000642 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:36:28.007227 systemd-logind[1483]: New session 9 of user core.
Jan 29 14:36:28.018074 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 14:36:28.489735 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 14:36:28.490252 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:36:28.493619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 14:36:28.500928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:36:28.514563 sudo[1719]: pam_unix(sudo:session): session closed for user root
Jan 29 14:36:28.658538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:36:28.662110 sshd[1716]: pam_unix(sshd:session): session closed for user core
Jan 29 14:36:28.671215 systemd[1]: sshd@6-10.244.17.238:22-139.178.68.195:43344.service: Deactivated successfully.
Jan 29 14:36:28.671336 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 14:36:28.674764 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 14:36:28.676704 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
Jan 29 14:36:28.679690 systemd-logind[1483]: Removed session 9.
Jan 29 14:36:28.745852 kubelet[1729]: E0129 14:36:28.745607 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 14:36:28.748818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 14:36:28.749065 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 14:36:28.824873 systemd[1]: Started sshd@7-10.244.17.238:22-139.178.68.195:43352.service - OpenSSH per-connection server daemon (139.178.68.195:43352).
Jan 29 14:36:29.702493 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 43352 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:36:29.705231 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:36:29.712695 systemd-logind[1483]: New session 10 of user core.
Jan 29 14:36:29.721036 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 14:36:30.179306 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 14:36:30.179765 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:36:30.185934 sudo[1743]: pam_unix(sudo:session): session closed for user root
Jan 29 14:36:30.193695 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 29 14:36:30.194157 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:36:30.218230 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 29 14:36:30.221141 auditctl[1746]: No rules
Jan 29 14:36:30.221673 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 14:36:30.221997 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 29 14:36:30.229302 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 14:36:30.264682 augenrules[1764]: No rules
Jan 29 14:36:30.265511 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 14:36:30.267268 sudo[1742]: pam_unix(sudo:session): session closed for user root
Jan 29 14:36:30.413608 sshd[1739]: pam_unix(sshd:session): session closed for user core
Jan 29 14:36:30.418914 systemd[1]: sshd@7-10.244.17.238:22-139.178.68.195:43352.service: Deactivated successfully.
Jan 29 14:36:30.421620 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 14:36:30.423405 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit.
Jan 29 14:36:30.425096 systemd-logind[1483]: Removed session 10.
Jan 29 14:36:30.578330 systemd[1]: Started sshd@8-10.244.17.238:22-139.178.68.195:43354.service - OpenSSH per-connection server daemon (139.178.68.195:43354).
Jan 29 14:36:31.461319 sshd[1772]: Accepted publickey for core from 139.178.68.195 port 43354 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:36:31.463481 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:36:31.470723 systemd-logind[1483]: New session 11 of user core.
Jan 29 14:36:31.481018 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 14:36:31.940480 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 14:36:31.940952 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:36:32.431255 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 14:36:32.431470 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 14:36:32.860730 dockerd[1791]: time="2025-01-29T14:36:32.859871090Z" level=info msg="Starting up"
Jan 29 14:36:32.973759 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport770069457-merged.mount: Deactivated successfully.
Jan 29 14:36:33.023194 dockerd[1791]: time="2025-01-29T14:36:33.022915526Z" level=info msg="Loading containers: start."
Jan 29 14:36:33.170881 kernel: Initializing XFRM netlink socket
Jan 29 14:36:33.277487 systemd-networkd[1417]: docker0: Link UP
Jan 29 14:36:33.299712 dockerd[1791]: time="2025-01-29T14:36:33.299639754Z" level=info msg="Loading containers: done."
Jan 29 14:36:33.320080 dockerd[1791]: time="2025-01-29T14:36:33.319921304Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 14:36:33.320080 dockerd[1791]: time="2025-01-29T14:36:33.320061699Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 29 14:36:33.320382 dockerd[1791]: time="2025-01-29T14:36:33.320289186Z" level=info msg="Daemon has completed initialization"
Jan 29 14:36:33.370987 dockerd[1791]: time="2025-01-29T14:36:33.370759131Z" level=info msg="API listen on /run/docker.sock"
Jan 29 14:36:33.371542 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 14:36:33.970949 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2603100634-merged.mount: Deactivated successfully.
Jan 29 14:36:34.779528 containerd[1501]: time="2025-01-29T14:36:34.779339487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 14:36:35.498677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774630650.mount: Deactivated successfully.
Jan 29 14:36:37.276643 containerd[1501]: time="2025-01-29T14:36:37.276484355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:37.278425 containerd[1501]: time="2025-01-29T14:36:37.278360880Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020"
Jan 29 14:36:37.280481 containerd[1501]: time="2025-01-29T14:36:37.280401895Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:37.284127 containerd[1501]: time="2025-01-29T14:36:37.284077697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:37.286302 containerd[1501]: time="2025-01-29T14:36:37.285896562Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.506415011s"
Jan 29 14:36:37.286302 containerd[1501]: time="2025-01-29T14:36:37.285962428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 29 14:36:37.322070 containerd[1501]: time="2025-01-29T14:36:37.322019436Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 14:36:38.263606 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 14:36:38.867504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 14:36:38.878248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:36:39.076923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:36:39.091328 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 14:36:39.227894 kubelet[2014]: E0129 14:36:39.227459 2014 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 14:36:39.233016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 14:36:39.233281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 14:36:39.626320 containerd[1501]: time="2025-01-29T14:36:39.626202630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:39.627982 containerd[1501]: time="2025-01-29T14:36:39.627905786Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753"
Jan 29 14:36:39.629420 containerd[1501]: time="2025-01-29T14:36:39.629310528Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:39.634058 containerd[1501]: time="2025-01-29T14:36:39.633949214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:39.635760 containerd[1501]: time="2025-01-29T14:36:39.635569524Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.313264984s"
Jan 29 14:36:39.635760 containerd[1501]: time="2025-01-29T14:36:39.635617032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 29 14:36:39.672299 containerd[1501]: time="2025-01-29T14:36:39.672231838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 14:36:41.373637 containerd[1501]: time="2025-01-29T14:36:41.373502585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:41.375665 containerd[1501]: time="2025-01-29T14:36:41.375599026Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072"
Jan 29 14:36:41.376728 containerd[1501]: time="2025-01-29T14:36:41.376656256Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:41.381039 containerd[1501]: time="2025-01-29T14:36:41.380919080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:41.383269 containerd[1501]: time="2025-01-29T14:36:41.382702939Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.710393453s"
Jan 29 14:36:41.383269 containerd[1501]: time="2025-01-29T14:36:41.382779302Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 29 14:36:41.420147 containerd[1501]: time="2025-01-29T14:36:41.420069758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 14:36:43.083039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888691661.mount: Deactivated successfully.
Jan 29 14:36:43.709942 containerd[1501]: time="2025-01-29T14:36:43.708971638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:43.711815 containerd[1501]: time="2025-01-29T14:36:43.711544767Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345"
Jan 29 14:36:43.712674 containerd[1501]: time="2025-01-29T14:36:43.712632418Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:43.716336 containerd[1501]: time="2025-01-29T14:36:43.716274231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:43.718026 containerd[1501]: time="2025-01-29T14:36:43.717897124Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.297439112s"
Jan 29 14:36:43.718026 containerd[1501]: time="2025-01-29T14:36:43.717972720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 29 14:36:43.753844 containerd[1501]: time="2025-01-29T14:36:43.753554755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 14:36:44.338419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018290297.mount: Deactivated successfully.
Jan 29 14:36:45.672993 containerd[1501]: time="2025-01-29T14:36:45.672892910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:45.674674 containerd[1501]: time="2025-01-29T14:36:45.674565968Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 29 14:36:45.675425 containerd[1501]: time="2025-01-29T14:36:45.675329384Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:45.679710 containerd[1501]: time="2025-01-29T14:36:45.679623863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:45.681503 containerd[1501]: time="2025-01-29T14:36:45.681261242Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.927653154s"
Jan 29 14:36:45.681503 containerd[1501]: time="2025-01-29T14:36:45.681313981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 29 14:36:45.713879 containerd[1501]: time="2025-01-29T14:36:45.713145771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 14:36:46.293209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338330090.mount: Deactivated successfully.
Jan 29 14:36:46.302138 containerd[1501]: time="2025-01-29T14:36:46.301048456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:46.304024 containerd[1501]: time="2025-01-29T14:36:46.303937930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 29 14:36:46.307421 containerd[1501]: time="2025-01-29T14:36:46.305553037Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:46.312257 containerd[1501]: time="2025-01-29T14:36:46.310173946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:46.312257 containerd[1501]: time="2025-01-29T14:36:46.311375478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 598.171567ms"
Jan 29 14:36:46.312257 containerd[1501]: time="2025-01-29T14:36:46.311412657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 29 14:36:46.343729 containerd[1501]: time="2025-01-29T14:36:46.343662142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 14:36:46.966453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728379880.mount: Deactivated successfully.
Jan 29 14:36:49.367232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 14:36:49.378071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:36:49.677147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:36:49.679285 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 14:36:49.785994 containerd[1501]: time="2025-01-29T14:36:49.785897548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:49.787905 kubelet[2161]: E0129 14:36:49.787456 2161 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 14:36:49.789067 containerd[1501]: time="2025-01-29T14:36:49.788577202Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Jan 29 14:36:49.791264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 14:36:49.792033 containerd[1501]: time="2025-01-29T14:36:49.790546070Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:49.791518 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 14:36:49.795877 containerd[1501]: time="2025-01-29T14:36:49.795025802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:36:49.796865 containerd[1501]: time="2025-01-29T14:36:49.796827958Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.453093017s"
Jan 29 14:36:49.797075 containerd[1501]: time="2025-01-29T14:36:49.797030281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 29 14:36:50.215617 update_engine[1484]: I20250129 14:36:50.215327 1484 update_attempter.cc:509] Updating boot flags...
Jan 29 14:36:50.273307 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2182)
Jan 29 14:36:50.355843 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2183)
Jan 29 14:36:53.678911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:36:53.690270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:36:53.720742 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-11.scope)...
Jan 29 14:36:53.720784 systemd[1]: Reloading...
Jan 29 14:36:53.895851 zram_generator::config[2280]: No configuration found.
Jan 29 14:36:54.046798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:36:54.157122 systemd[1]: Reloading finished in 435 ms.
Jan 29 14:36:54.222227 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 14:36:54.222379 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 14:36:54.222908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:36:54.232621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:36:54.369569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:36:54.384261 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 14:36:54.538798 kubelet[2350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 14:36:54.538798 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 14:36:54.538798 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 14:36:54.540201 kubelet[2350]: I0129 14:36:54.540084 2350 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 14:36:54.915586 kubelet[2350]: I0129 14:36:54.915499 2350 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 14:36:54.915586 kubelet[2350]: I0129 14:36:54.915563 2350 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 14:36:54.915916 kubelet[2350]: I0129 14:36:54.915878 2350 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 14:36:54.950136 kubelet[2350]: I0129 14:36:54.949972 2350 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 14:36:54.953454 kubelet[2350]: E0129 14:36:54.953152 2350 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.17.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.17.238:6443: connect: connection refused
Jan 29 14:36:54.974631 kubelet[2350]: I0129 14:36:54.974481 2350 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 14:36:54.974954 kubelet[2350]: I0129 14:36:54.974899 2350 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 14:36:54.975240 kubelet[2350]: I0129 14:36:54.974947 2350 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rni4s.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 14:36:54.976111 kubelet[2350]: I0129 14:36:54.976025 2350 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 14:36:54.976111 kubelet[2350]: I0129 14:36:54.976067 2350 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 14:36:54.976398 kubelet[2350]: I0129 14:36:54.976336 2350 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 14:36:54.977422 kubelet[2350]: I0129 14:36:54.977387 2350 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 14:36:54.977422 kubelet[2350]: I0129 14:36:54.977418 2350 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 14:36:54.977581 kubelet[2350]: I0129 14:36:54.977478 2350 kubelet.go:312] "Adding apiserver pod source"
Jan 29 14:36:54.977581 kubelet[2350]: I0129 14:36:54.977519 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 14:36:54.983097 kubelet[2350]: W0129 14:36:54.982891 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.17.238:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused
Jan 29 14:36:54.983097 kubelet[2350]: E0129 14:36:54.983012 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.17.238:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused
Jan 29 14:36:54.983523 kubelet[2350]: W0129 14:36:54.983373 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.17.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rni4s.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused
Jan 29 14:36:54.983523 kubelet[2350]: E0129 14:36:54.983420 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.17.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rni4s.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused
Jan 29 14:36:54.983731 kubelet[2350]: I0129 14:36:54.983548 2350 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 14:36:54.986608 kubelet[2350]: I0129 14:36:54.985497 2350 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 14:36:54.986608 kubelet[2350]: W0129 14:36:54.985671 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 14:36:54.987432 kubelet[2350]: I0129 14:36:54.986972 2350 server.go:1264] "Started kubelet"
Jan 29 14:36:54.991080 kubelet[2350]: I0129 14:36:54.990888 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 14:36:54.999031 kubelet[2350]: I0129 14:36:54.998958 2350 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 14:36:55.000749 kubelet[2350]: I0129 14:36:55.000704 2350 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 14:36:55.002338 kubelet[2350]: I0129 14:36:55.002263 2350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 14:36:55.002961 kubelet[2350]: I0129 14:36:55.002526 2350 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 14:36:55.002961 kubelet[2350]: I0129 14:36:55.002608 2350 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 14:36:55.006534 kubelet[2350]: I0129 14:36:55.006307 2350 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 14:36:55.006534 kubelet[2350]: I0129 14:36:55.006417 2350 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 14:36:55.011321 kubelet[2350]: E0129 14:36:55.008692 2350
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.17.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rni4s.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.17.238:6443: connect: connection refused" interval="200ms" Jan 29 14:36:55.011970 kubelet[2350]: I0129 14:36:55.011929 2350 factory.go:221] Registration of the systemd container factory successfully Jan 29 14:36:55.012237 kubelet[2350]: I0129 14:36:55.012201 2350 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 14:36:55.014841 kubelet[2350]: E0129 14:36:55.014672 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.17.238:6443/api/v1/namespaces/default/events\": dial tcp 10.244.17.238:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rni4s.gb1.brightbox.com.181f30990696051f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rni4s.gb1.brightbox.com,UID:srv-rni4s.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rni4s.gb1.brightbox.com,},FirstTimestamp:2025-01-29 14:36:54.986925343 +0000 UTC m=+0.596824513,LastTimestamp:2025-01-29 14:36:54.986925343 +0000 UTC m=+0.596824513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rni4s.gb1.brightbox.com,}" Jan 29 14:36:55.015741 kubelet[2350]: W0129 14:36:55.015709 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.17.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:55.015930 kubelet[2350]: 
E0129 14:36:55.015906 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.17.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:55.016351 kubelet[2350]: I0129 14:36:55.016325 2350 factory.go:221] Registration of the containerd container factory successfully Jan 29 14:36:55.078065 kubelet[2350]: I0129 14:36:55.078013 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 14:36:55.079907 kubelet[2350]: I0129 14:36:55.079881 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 14:36:55.080056 kubelet[2350]: I0129 14:36:55.080036 2350 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 14:36:55.080661 kubelet[2350]: I0129 14:36:55.080175 2350 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 14:36:55.080661 kubelet[2350]: E0129 14:36:55.080270 2350 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 14:36:55.090704 kubelet[2350]: W0129 14:36:55.090646 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.17.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:55.090941 kubelet[2350]: E0129 14:36:55.090915 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.17.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:55.096583 kubelet[2350]: I0129 14:36:55.096533 2350 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 
14:36:55.096583 kubelet[2350]: I0129 14:36:55.096562 2350 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 14:36:55.096583 kubelet[2350]: I0129 14:36:55.096623 2350 state_mem.go:36] "Initialized new in-memory state store" Jan 29 14:36:55.098618 kubelet[2350]: I0129 14:36:55.098525 2350 policy_none.go:49] "None policy: Start" Jan 29 14:36:55.099526 kubelet[2350]: I0129 14:36:55.099492 2350 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 14:36:55.100137 kubelet[2350]: I0129 14:36:55.099752 2350 state_mem.go:35] "Initializing new in-memory state store" Jan 29 14:36:55.105773 kubelet[2350]: I0129 14:36:55.105726 2350 kubelet_node_status.go:73] "Attempting to register node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.106642 kubelet[2350]: E0129 14:36:55.106532 2350 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.17.238:6443/api/v1/nodes\": dial tcp 10.244.17.238:6443: connect: connection refused" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.113635 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 14:36:55.130542 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 14:36:55.135094 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 14:36:55.142417 kubelet[2350]: I0129 14:36:55.142043 2350 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 14:36:55.142538 kubelet[2350]: I0129 14:36:55.142414 2350 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 14:36:55.142790 kubelet[2350]: I0129 14:36:55.142625 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 14:36:55.145447 kubelet[2350]: E0129 14:36:55.145259 2350 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-rni4s.gb1.brightbox.com\" not found" Jan 29 14:36:55.179422 kubelet[2350]: E0129 14:36:55.179140 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.17.238:6443/api/v1/namespaces/default/events\": dial tcp 10.244.17.238:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rni4s.gb1.brightbox.com.181f30990696051f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rni4s.gb1.brightbox.com,UID:srv-rni4s.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rni4s.gb1.brightbox.com,},FirstTimestamp:2025-01-29 14:36:54.986925343 +0000 UTC m=+0.596824513,LastTimestamp:2025-01-29 14:36:54.986925343 +0000 UTC m=+0.596824513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rni4s.gb1.brightbox.com,}" Jan 29 14:36:55.181523 kubelet[2350]: I0129 14:36:55.181455 2350 topology_manager.go:215] "Topology Admit Handler" podUID="a440675ebd0c4f962509fe4c8dab64bd" podNamespace="kube-system" podName="kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.184591 kubelet[2350]: I0129 14:36:55.184273 2350 topology_manager.go:215] 
"Topology Admit Handler" podUID="dd0d9a82815b1d82bfede134b7d44f09" podNamespace="kube-system" podName="kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.186689 kubelet[2350]: I0129 14:36:55.186656 2350 topology_manager.go:215] "Topology Admit Handler" podUID="4ab6b5e26a6a62b4024471c40f75b96e" podNamespace="kube-system" podName="kube-scheduler-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.197543 systemd[1]: Created slice kubepods-burstable-poda440675ebd0c4f962509fe4c8dab64bd.slice - libcontainer container kubepods-burstable-poda440675ebd0c4f962509fe4c8dab64bd.slice. Jan 29 14:36:55.207761 systemd[1]: Created slice kubepods-burstable-pod4ab6b5e26a6a62b4024471c40f75b96e.slice - libcontainer container kubepods-burstable-pod4ab6b5e26a6a62b4024471c40f75b96e.slice. Jan 29 14:36:55.209841 kubelet[2350]: E0129 14:36:55.209776 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.17.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rni4s.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.17.238:6443: connect: connection refused" interval="400ms" Jan 29 14:36:55.214884 systemd[1]: Created slice kubepods-burstable-poddd0d9a82815b1d82bfede134b7d44f09.slice - libcontainer container kubepods-burstable-poddd0d9a82815b1d82bfede134b7d44f09.slice. 
Jan 29 14:36:55.307278 kubelet[2350]: I0129 14:36:55.307211 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a440675ebd0c4f962509fe4c8dab64bd-ca-certs\") pod \"kube-apiserver-srv-rni4s.gb1.brightbox.com\" (UID: \"a440675ebd0c4f962509fe4c8dab64bd\") " pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.307562 kubelet[2350]: I0129 14:36:55.307532 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-k8s-certs\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.307712 kubelet[2350]: I0129 14:36:55.307686 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ab6b5e26a6a62b4024471c40f75b96e-kubeconfig\") pod \"kube-scheduler-srv-rni4s.gb1.brightbox.com\" (UID: \"4ab6b5e26a6a62b4024471c40f75b96e\") " pod="kube-system/kube-scheduler-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.307891 kubelet[2350]: I0129 14:36:55.307862 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.308095 kubelet[2350]: I0129 14:36:55.308042 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a440675ebd0c4f962509fe4c8dab64bd-k8s-certs\") pod 
\"kube-apiserver-srv-rni4s.gb1.brightbox.com\" (UID: \"a440675ebd0c4f962509fe4c8dab64bd\") " pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.308231 kubelet[2350]: I0129 14:36:55.308204 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a440675ebd0c4f962509fe4c8dab64bd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rni4s.gb1.brightbox.com\" (UID: \"a440675ebd0c4f962509fe4c8dab64bd\") " pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.308973 kubelet[2350]: I0129 14:36:55.308917 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-ca-certs\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.309058 kubelet[2350]: I0129 14:36:55.308986 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-flexvolume-dir\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.309058 kubelet[2350]: I0129 14:36:55.309016 2350 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-kubeconfig\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.310768 kubelet[2350]: I0129 14:36:55.310743 2350 
kubelet_node_status.go:73] "Attempting to register node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.311639 kubelet[2350]: E0129 14:36:55.311599 2350 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.17.238:6443/api/v1/nodes\": dial tcp 10.244.17.238:6443: connect: connection refused" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.506401 containerd[1501]: time="2025-01-29T14:36:55.506210207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rni4s.gb1.brightbox.com,Uid:a440675ebd0c4f962509fe4c8dab64bd,Namespace:kube-system,Attempt:0,}" Jan 29 14:36:55.519639 containerd[1501]: time="2025-01-29T14:36:55.519559760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rni4s.gb1.brightbox.com,Uid:dd0d9a82815b1d82bfede134b7d44f09,Namespace:kube-system,Attempt:0,}" Jan 29 14:36:55.520004 containerd[1501]: time="2025-01-29T14:36:55.519711858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rni4s.gb1.brightbox.com,Uid:4ab6b5e26a6a62b4024471c40f75b96e,Namespace:kube-system,Attempt:0,}" Jan 29 14:36:55.612236 kubelet[2350]: E0129 14:36:55.612168 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.17.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rni4s.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.17.238:6443: connect: connection refused" interval="800ms" Jan 29 14:36:55.714659 kubelet[2350]: I0129 14:36:55.714615 2350 kubelet_node_status.go:73] "Attempting to register node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:55.715524 kubelet[2350]: E0129 14:36:55.715480 2350 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.17.238:6443/api/v1/nodes\": dial tcp 10.244.17.238:6443: connect: connection refused" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:56.115792 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount808423677.mount: Deactivated successfully. Jan 29 14:36:56.142600 containerd[1501]: time="2025-01-29T14:36:56.142408694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:36:56.143954 containerd[1501]: time="2025-01-29T14:36:56.143896883Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:36:56.145185 containerd[1501]: time="2025-01-29T14:36:56.145132779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 14:36:56.145436 containerd[1501]: time="2025-01-29T14:36:56.145397185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 14:36:56.146079 containerd[1501]: time="2025-01-29T14:36:56.145892471Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:36:56.147836 containerd[1501]: time="2025-01-29T14:36:56.147543155Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:36:56.147966 containerd[1501]: time="2025-01-29T14:36:56.147922738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 14:36:56.151790 containerd[1501]: time="2025-01-29T14:36:56.151750299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:36:56.154172 containerd[1501]: time="2025-01-29T14:36:56.153845733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.98132ms" Jan 29 14:36:56.158105 containerd[1501]: time="2025-01-29T14:36:56.158010928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.610635ms" Jan 29 14:36:56.164162 containerd[1501]: time="2025-01-29T14:36:56.164101840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.264868ms" Jan 29 14:36:56.256710 kubelet[2350]: W0129 14:36:56.256404 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.17.238:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.256710 kubelet[2350]: E0129 14:36:56.256557 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.17.238:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.276030 kubelet[2350]: W0129 14:36:56.275750 2350 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.17.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.276030 kubelet[2350]: E0129 14:36:56.275851 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.17.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.368088 containerd[1501]: time="2025-01-29T14:36:56.366303111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:36:56.368088 containerd[1501]: time="2025-01-29T14:36:56.366399129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:36:56.368088 containerd[1501]: time="2025-01-29T14:36:56.366423070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:36:56.368088 containerd[1501]: time="2025-01-29T14:36:56.366572462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:36:56.376068 containerd[1501]: time="2025-01-29T14:36:56.375863988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:36:56.376068 containerd[1501]: time="2025-01-29T14:36:56.375971902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:36:56.376263 containerd[1501]: time="2025-01-29T14:36:56.376080090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:36:56.377514 containerd[1501]: time="2025-01-29T14:36:56.376356417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:36:56.383881 containerd[1501]: time="2025-01-29T14:36:56.383465410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:36:56.383881 containerd[1501]: time="2025-01-29T14:36:56.383548642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:36:56.383881 containerd[1501]: time="2025-01-29T14:36:56.383569849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:36:56.383881 containerd[1501]: time="2025-01-29T14:36:56.383680133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:36:56.409039 systemd[1]: Started cri-containerd-ee2f95402e163b3787ea6ee051fbde76f4225d71ba2081477e25eac77d6cfd13.scope - libcontainer container ee2f95402e163b3787ea6ee051fbde76f4225d71ba2081477e25eac77d6cfd13. Jan 29 14:36:56.413460 kubelet[2350]: E0129 14:36:56.413201 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.17.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rni4s.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.17.238:6443: connect: connection refused" interval="1.6s" Jan 29 14:36:56.417024 systemd[1]: Started cri-containerd-adba9191385b166ea45596dcea2c893079a251e7d5b1c185750ffba9015d4807.scope - libcontainer container adba9191385b166ea45596dcea2c893079a251e7d5b1c185750ffba9015d4807. 
Jan 29 14:36:56.442339 systemd[1]: Started cri-containerd-4983c54fe1bfc07d104f453653b246476781afbb2541bcba30cf263ff060cd8a.scope - libcontainer container 4983c54fe1bfc07d104f453653b246476781afbb2541bcba30cf263ff060cd8a. Jan 29 14:36:56.464791 kubelet[2350]: W0129 14:36:56.464719 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.17.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rni4s.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.466183 kubelet[2350]: E0129 14:36:56.464798 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.17.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rni4s.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.502980 containerd[1501]: time="2025-01-29T14:36:56.502765745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rni4s.gb1.brightbox.com,Uid:a440675ebd0c4f962509fe4c8dab64bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee2f95402e163b3787ea6ee051fbde76f4225d71ba2081477e25eac77d6cfd13\"" Jan 29 14:36:56.514444 containerd[1501]: time="2025-01-29T14:36:56.514394352Z" level=info msg="CreateContainer within sandbox \"ee2f95402e163b3787ea6ee051fbde76f4225d71ba2081477e25eac77d6cfd13\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 14:36:56.519679 kubelet[2350]: I0129 14:36:56.519223 2350 kubelet_node_status.go:73] "Attempting to register node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:56.519679 kubelet[2350]: E0129 14:36:56.519636 2350 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.17.238:6443/api/v1/nodes\": dial tcp 10.244.17.238:6443: connect: connection refused" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:56.554911 
containerd[1501]: time="2025-01-29T14:36:56.554418651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rni4s.gb1.brightbox.com,Uid:dd0d9a82815b1d82bfede134b7d44f09,Namespace:kube-system,Attempt:0,} returns sandbox id \"4983c54fe1bfc07d104f453653b246476781afbb2541bcba30cf263ff060cd8a\"" Jan 29 14:36:56.559668 containerd[1501]: time="2025-01-29T14:36:56.559380969Z" level=info msg="CreateContainer within sandbox \"4983c54fe1bfc07d104f453653b246476781afbb2541bcba30cf263ff060cd8a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 14:36:56.562035 containerd[1501]: time="2025-01-29T14:36:56.561933531Z" level=info msg="CreateContainer within sandbox \"ee2f95402e163b3787ea6ee051fbde76f4225d71ba2081477e25eac77d6cfd13\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"147f0032916bf00da34baa1370cbd1dee2a339e1c67e1806df5e58e8d9286d02\"" Jan 29 14:36:56.564706 containerd[1501]: time="2025-01-29T14:36:56.563307960Z" level=info msg="StartContainer for \"147f0032916bf00da34baa1370cbd1dee2a339e1c67e1806df5e58e8d9286d02\"" Jan 29 14:36:56.569114 containerd[1501]: time="2025-01-29T14:36:56.569076848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rni4s.gb1.brightbox.com,Uid:4ab6b5e26a6a62b4024471c40f75b96e,Namespace:kube-system,Attempt:0,} returns sandbox id \"adba9191385b166ea45596dcea2c893079a251e7d5b1c185750ffba9015d4807\"" Jan 29 14:36:56.574332 containerd[1501]: time="2025-01-29T14:36:56.574291249Z" level=info msg="CreateContainer within sandbox \"adba9191385b166ea45596dcea2c893079a251e7d5b1c185750ffba9015d4807\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 14:36:56.585252 containerd[1501]: time="2025-01-29T14:36:56.585199759Z" level=info msg="CreateContainer within sandbox \"4983c54fe1bfc07d104f453653b246476781afbb2541bcba30cf263ff060cd8a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"4015a68511afed8883c0b7cb1ff30e86043010ba2b7df568692276da5d092d9c\"" Jan 29 14:36:56.586169 containerd[1501]: time="2025-01-29T14:36:56.586120734Z" level=info msg="StartContainer for \"4015a68511afed8883c0b7cb1ff30e86043010ba2b7df568692276da5d092d9c\"" Jan 29 14:36:56.587206 kubelet[2350]: W0129 14:36:56.587127 2350 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.17.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.587352 kubelet[2350]: E0129 14:36:56.587221 2350 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.17.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:56.608480 containerd[1501]: time="2025-01-29T14:36:56.608430951Z" level=info msg="CreateContainer within sandbox \"adba9191385b166ea45596dcea2c893079a251e7d5b1c185750ffba9015d4807\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf3d8c9d5acd5fd69592f6ac80b1c15c42bbe171fad9af4f12d2bcd0a1456e31\"" Jan 29 14:36:56.609160 systemd[1]: Started cri-containerd-147f0032916bf00da34baa1370cbd1dee2a339e1c67e1806df5e58e8d9286d02.scope - libcontainer container 147f0032916bf00da34baa1370cbd1dee2a339e1c67e1806df5e58e8d9286d02. Jan 29 14:36:56.610180 containerd[1501]: time="2025-01-29T14:36:56.610051643Z" level=info msg="StartContainer for \"cf3d8c9d5acd5fd69592f6ac80b1c15c42bbe171fad9af4f12d2bcd0a1456e31\"" Jan 29 14:36:56.648025 systemd[1]: Started cri-containerd-4015a68511afed8883c0b7cb1ff30e86043010ba2b7df568692276da5d092d9c.scope - libcontainer container 4015a68511afed8883c0b7cb1ff30e86043010ba2b7df568692276da5d092d9c. 
Jan 29 14:36:56.674082 systemd[1]: Started cri-containerd-cf3d8c9d5acd5fd69592f6ac80b1c15c42bbe171fad9af4f12d2bcd0a1456e31.scope - libcontainer container cf3d8c9d5acd5fd69592f6ac80b1c15c42bbe171fad9af4f12d2bcd0a1456e31. Jan 29 14:36:56.719344 containerd[1501]: time="2025-01-29T14:36:56.719083907Z" level=info msg="StartContainer for \"147f0032916bf00da34baa1370cbd1dee2a339e1c67e1806df5e58e8d9286d02\" returns successfully" Jan 29 14:36:56.749174 containerd[1501]: time="2025-01-29T14:36:56.749125477Z" level=info msg="StartContainer for \"4015a68511afed8883c0b7cb1ff30e86043010ba2b7df568692276da5d092d9c\" returns successfully" Jan 29 14:36:56.786320 containerd[1501]: time="2025-01-29T14:36:56.786243530Z" level=info msg="StartContainer for \"cf3d8c9d5acd5fd69592f6ac80b1c15c42bbe171fad9af4f12d2bcd0a1456e31\" returns successfully" Jan 29 14:36:57.125497 kubelet[2350]: E0129 14:36:57.125423 2350 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.17.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.17.238:6443: connect: connection refused Jan 29 14:36:58.132870 kubelet[2350]: I0129 14:36:58.132155 2350 kubelet_node_status.go:73] "Attempting to register node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:59.940737 kubelet[2350]: E0129 14:36:59.940395 2350 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-rni4s.gb1.brightbox.com\" not found" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:36:59.986141 kubelet[2350]: I0129 14:36:59.985836 2350 apiserver.go:52] "Watching apiserver" Jan 29 14:37:00.007314 kubelet[2350]: I0129 14:37:00.007262 2350 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 14:37:00.061935 kubelet[2350]: I0129 14:37:00.061845 2350 kubelet_node_status.go:76] "Successfully 
registered node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:01.127943 kubelet[2350]: W0129 14:37:01.127765 2350 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 14:37:02.013077 systemd[1]: Reloading requested from client PID 2624 ('systemctl') (unit session-11.scope)... Jan 29 14:37:02.013130 systemd[1]: Reloading... Jan 29 14:37:02.127006 zram_generator::config[2659]: No configuration found. Jan 29 14:37:02.325157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 14:37:02.456254 systemd[1]: Reloading finished in 442 ms. Jan 29 14:37:02.520138 kubelet[2350]: E0129 14:37:02.519484 2350 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{srv-rni4s.gb1.brightbox.com.181f30990696051f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rni4s.gb1.brightbox.com,UID:srv-rni4s.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rni4s.gb1.brightbox.com,},FirstTimestamp:2025-01-29 14:36:54.986925343 +0000 UTC m=+0.596824513,LastTimestamp:2025-01-29 14:36:54.986925343 +0000 UTC m=+0.596824513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rni4s.gb1.brightbox.com,}" Jan 29 14:37:02.521019 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:37:02.538399 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 14:37:02.539554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 14:37:02.539877 systemd[1]: kubelet.service: Consumed 1.045s CPU time, 112.1M memory peak, 0B memory swap peak. Jan 29 14:37:02.547343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:37:02.760776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 14:37:02.774385 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 14:37:02.891879 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 14:37:02.891879 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 14:37:02.891879 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 14:37:02.894837 kubelet[2727]: I0129 14:37:02.894015 2727 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 14:37:02.902653 kubelet[2727]: I0129 14:37:02.902611 2727 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 14:37:02.902933 kubelet[2727]: I0129 14:37:02.902913 2727 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 14:37:02.903317 kubelet[2727]: I0129 14:37:02.903284 2727 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 14:37:02.905443 kubelet[2727]: I0129 14:37:02.905416 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 14:37:02.907863 kubelet[2727]: I0129 14:37:02.907826 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 14:37:02.927100 kubelet[2727]: I0129 14:37:02.927039 2727 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 14:37:02.927563 kubelet[2727]: I0129 14:37:02.927472 2727 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 14:37:02.927974 kubelet[2727]: I0129 14:37:02.927538 2727 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rni4s.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experim
entalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 14:37:02.928157 kubelet[2727]: I0129 14:37:02.928005 2727 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 14:37:02.928157 kubelet[2727]: I0129 14:37:02.928026 2727 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 14:37:02.928157 kubelet[2727]: I0129 14:37:02.928128 2727 state_mem.go:36] "Initialized new in-memory state store" Jan 29 14:37:02.928404 kubelet[2727]: I0129 14:37:02.928371 2727 kubelet.go:400] "Attempting to sync node with API server" Jan 29 14:37:02.929323 kubelet[2727]: I0129 14:37:02.929279 2727 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 14:37:02.929387 kubelet[2727]: I0129 14:37:02.929362 2727 kubelet.go:312] "Adding apiserver pod source" Jan 29 14:37:02.929457 kubelet[2727]: I0129 14:37:02.929408 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 14:37:02.951435 kubelet[2727]: I0129 14:37:02.949280 2727 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 14:37:02.951919 kubelet[2727]: I0129 14:37:02.951890 2727 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 14:37:02.958473 kubelet[2727]: I0129 14:37:02.958439 2727 server.go:1264] "Started kubelet" Jan 29 14:37:02.965472 kubelet[2727]: I0129 14:37:02.961712 2727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 14:37:02.965472 kubelet[2727]: I0129 14:37:02.964446 2727 server.go:455] "Adding debug handlers to kubelet server" Jan 29 14:37:02.966880 kubelet[2727]: I0129 14:37:02.966857 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 14:37:02.969197 kubelet[2727]: I0129 14:37:02.969121 2727 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 14:37:02.971202 kubelet[2727]: I0129 14:37:02.969533 2727 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 14:37:02.978088 kubelet[2727]: I0129 14:37:02.978062 2727 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 14:37:02.980438 kubelet[2727]: E0129 14:37:02.979862 2727 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 14:37:02.982345 kubelet[2727]: I0129 14:37:02.982310 2727 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 14:37:02.982760 kubelet[2727]: I0129 14:37:02.982704 2727 reconciler.go:26] "Reconciler: start to sync state" Jan 29 14:37:02.984324 kubelet[2727]: I0129 14:37:02.983869 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 14:37:02.988220 kubelet[2727]: I0129 14:37:02.988183 2727 factory.go:221] Registration of the containerd container factory successfully Jan 29 14:37:02.988376 kubelet[2727]: I0129 14:37:02.988356 2727 factory.go:221] Registration of the systemd container factory successfully Jan 29 14:37:03.034192 kubelet[2727]: I0129 14:37:03.034025 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 14:37:03.040100 kubelet[2727]: I0129 14:37:03.039560 2727 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 14:37:03.040100 kubelet[2727]: I0129 14:37:03.039649 2727 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 14:37:03.040100 kubelet[2727]: I0129 14:37:03.040014 2727 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 14:37:03.042706 kubelet[2727]: E0129 14:37:03.042671 2727 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 14:37:03.100291 kubelet[2727]: I0129 14:37:03.099874 2727 kubelet_node_status.go:73] "Attempting to register node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.117259 kubelet[2727]: I0129 14:37:03.117219 2727 kubelet_node_status.go:112] "Node was previously registered" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.119235 kubelet[2727]: I0129 14:37:03.119087 2727 kubelet_node_status.go:76] "Successfully registered node" node="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.125836 kubelet[2727]: I0129 14:37:03.125765 2727 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 14:37:03.125836 kubelet[2727]: I0129 14:37:03.125791 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 14:37:03.126850 kubelet[2727]: I0129 14:37:03.126044 2727 state_mem.go:36] "Initialized new in-memory state store" Jan 29 14:37:03.126850 kubelet[2727]: I0129 14:37:03.126341 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 14:37:03.126850 kubelet[2727]: I0129 14:37:03.126361 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 14:37:03.126850 kubelet[2727]: I0129 14:37:03.126412 2727 policy_none.go:49] "None policy: Start" Jan 29 14:37:03.129041 kubelet[2727]: I0129 14:37:03.129020 2727 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 14:37:03.129186 kubelet[2727]: I0129 14:37:03.129168 2727 state_mem.go:35] "Initializing new in-memory state store" Jan 29 14:37:03.129457 kubelet[2727]: I0129 14:37:03.129434 2727 
state_mem.go:75] "Updated machine memory state" Jan 29 14:37:03.142929 kubelet[2727]: I0129 14:37:03.142897 2727 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 14:37:03.143580 kubelet[2727]: E0129 14:37:03.143127 2727 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 14:37:03.144630 kubelet[2727]: I0129 14:37:03.143646 2727 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 14:37:03.144923 kubelet[2727]: I0129 14:37:03.144902 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 14:37:03.349952 kubelet[2727]: I0129 14:37:03.345726 2727 topology_manager.go:215] "Topology Admit Handler" podUID="4ab6b5e26a6a62b4024471c40f75b96e" podNamespace="kube-system" podName="kube-scheduler-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.349952 kubelet[2727]: I0129 14:37:03.345975 2727 topology_manager.go:215] "Topology Admit Handler" podUID="a440675ebd0c4f962509fe4c8dab64bd" podNamespace="kube-system" podName="kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.349952 kubelet[2727]: I0129 14:37:03.346563 2727 topology_manager.go:215] "Topology Admit Handler" podUID="dd0d9a82815b1d82bfede134b7d44f09" podNamespace="kube-system" podName="kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.358141 kubelet[2727]: W0129 14:37:03.357773 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 14:37:03.364966 kubelet[2727]: W0129 14:37:03.364940 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 14:37:03.369696 kubelet[2727]: W0129 14:37:03.369420 2727 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 14:37:03.369696 kubelet[2727]: E0129 14:37:03.369519 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-rni4s.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386174 kubelet[2727]: I0129 14:37:03.385720 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-flexvolume-dir\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386174 kubelet[2727]: I0129 14:37:03.385777 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-k8s-certs\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386174 kubelet[2727]: I0129 14:37:03.385858 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a440675ebd0c4f962509fe4c8dab64bd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rni4s.gb1.brightbox.com\" (UID: \"a440675ebd0c4f962509fe4c8dab64bd\") " pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386174 kubelet[2727]: I0129 14:37:03.385913 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-ca-certs\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" 
(UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386174 kubelet[2727]: I0129 14:37:03.385945 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-kubeconfig\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386559 kubelet[2727]: I0129 14:37:03.385979 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd0d9a82815b1d82bfede134b7d44f09-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rni4s.gb1.brightbox.com\" (UID: \"dd0d9a82815b1d82bfede134b7d44f09\") " pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386559 kubelet[2727]: I0129 14:37:03.386012 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ab6b5e26a6a62b4024471c40f75b96e-kubeconfig\") pod \"kube-scheduler-srv-rni4s.gb1.brightbox.com\" (UID: \"4ab6b5e26a6a62b4024471c40f75b96e\") " pod="kube-system/kube-scheduler-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386559 kubelet[2727]: I0129 14:37:03.386038 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a440675ebd0c4f962509fe4c8dab64bd-ca-certs\") pod \"kube-apiserver-srv-rni4s.gb1.brightbox.com\" (UID: \"a440675ebd0c4f962509fe4c8dab64bd\") " pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.386559 kubelet[2727]: I0129 14:37:03.386063 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a440675ebd0c4f962509fe4c8dab64bd-k8s-certs\") pod \"kube-apiserver-srv-rni4s.gb1.brightbox.com\" (UID: \"a440675ebd0c4f962509fe4c8dab64bd\") " pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:03.945735 kubelet[2727]: I0129 14:37:03.945677 2727 apiserver.go:52] "Watching apiserver" Jan 29 14:37:03.983029 kubelet[2727]: I0129 14:37:03.982858 2727 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 14:37:04.104550 kubelet[2727]: W0129 14:37:04.104060 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 14:37:04.104550 kubelet[2727]: E0129 14:37:04.104151 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-rni4s.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-rni4s.gb1.brightbox.com" Jan 29 14:37:04.139873 kubelet[2727]: I0129 14:37:04.139774 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-rni4s.gb1.brightbox.com" podStartSLOduration=3.139737534 podStartE2EDuration="3.139737534s" podCreationTimestamp="2025-01-29 14:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:37:04.11375893 +0000 UTC m=+1.306450205" watchObservedRunningTime="2025-01-29 14:37:04.139737534 +0000 UTC m=+1.332428804" Jan 29 14:37:04.170156 kubelet[2727]: I0129 14:37:04.170081 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-rni4s.gb1.brightbox.com" podStartSLOduration=1.170058195 podStartE2EDuration="1.170058195s" podCreationTimestamp="2025-01-29 14:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-01-29 14:37:04.167712103 +0000 UTC m=+1.360403378" watchObservedRunningTime="2025-01-29 14:37:04.170058195 +0000 UTC m=+1.362749461" Jan 29 14:37:04.170481 kubelet[2727]: I0129 14:37:04.170186 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-rni4s.gb1.brightbox.com" podStartSLOduration=1.170178931 podStartE2EDuration="1.170178931s" podCreationTimestamp="2025-01-29 14:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:37:04.143957478 +0000 UTC m=+1.336648759" watchObservedRunningTime="2025-01-29 14:37:04.170178931 +0000 UTC m=+1.362870208" Jan 29 14:37:07.474697 sudo[1775]: pam_unix(sudo:session): session closed for user root Jan 29 14:37:07.622394 sshd[1772]: pam_unix(sshd:session): session closed for user core Jan 29 14:37:07.632254 systemd[1]: sshd@8-10.244.17.238:22-139.178.68.195:43354.service: Deactivated successfully. Jan 29 14:37:07.636842 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 14:37:07.637298 systemd[1]: session-11.scope: Consumed 6.347s CPU time, 190.5M memory peak, 0B memory swap peak. Jan 29 14:37:07.638287 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Jan 29 14:37:07.640786 systemd-logind[1483]: Removed session 11. Jan 29 14:37:15.952873 kubelet[2727]: I0129 14:37:15.952758 2727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 14:37:15.954270 containerd[1501]: time="2025-01-29T14:37:15.954012412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 14:37:15.957146 kubelet[2727]: I0129 14:37:15.954424 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 14:37:16.876656 kubelet[2727]: I0129 14:37:16.874219 2727 topology_manager.go:215] "Topology Admit Handler" podUID="85ff70ff-c37f-4006-b085-4c8e5552ab32" podNamespace="kube-system" podName="kube-proxy-g85rr" Jan 29 14:37:16.892613 systemd[1]: Created slice kubepods-besteffort-pod85ff70ff_c37f_4006_b085_4c8e5552ab32.slice - libcontainer container kubepods-besteffort-pod85ff70ff_c37f_4006_b085_4c8e5552ab32.slice. Jan 29 14:37:17.003785 kubelet[2727]: I0129 14:37:17.002652 2727 topology_manager.go:215] "Topology Admit Handler" podUID="36917d7f-c4d0-4f96-8ea3-822395642bc8" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-bkf95" Jan 29 14:37:17.018978 systemd[1]: Created slice kubepods-besteffort-pod36917d7f_c4d0_4f96_8ea3_822395642bc8.slice - libcontainer container kubepods-besteffort-pod36917d7f_c4d0_4f96_8ea3_822395642bc8.slice. 
Jan 29 14:37:17.074231 kubelet[2727]: I0129 14:37:17.074147 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85ff70ff-c37f-4006-b085-4c8e5552ab32-lib-modules\") pod \"kube-proxy-g85rr\" (UID: \"85ff70ff-c37f-4006-b085-4c8e5552ab32\") " pod="kube-system/kube-proxy-g85rr" Jan 29 14:37:17.074882 kubelet[2727]: I0129 14:37:17.074818 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdnk\" (UniqueName: \"kubernetes.io/projected/85ff70ff-c37f-4006-b085-4c8e5552ab32-kube-api-access-9jdnk\") pod \"kube-proxy-g85rr\" (UID: \"85ff70ff-c37f-4006-b085-4c8e5552ab32\") " pod="kube-system/kube-proxy-g85rr" Jan 29 14:37:17.075000 kubelet[2727]: I0129 14:37:17.074896 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85ff70ff-c37f-4006-b085-4c8e5552ab32-xtables-lock\") pod \"kube-proxy-g85rr\" (UID: \"85ff70ff-c37f-4006-b085-4c8e5552ab32\") " pod="kube-system/kube-proxy-g85rr" Jan 29 14:37:17.075000 kubelet[2727]: I0129 14:37:17.074989 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85ff70ff-c37f-4006-b085-4c8e5552ab32-kube-proxy\") pod \"kube-proxy-g85rr\" (UID: \"85ff70ff-c37f-4006-b085-4c8e5552ab32\") " pod="kube-system/kube-proxy-g85rr" Jan 29 14:37:17.175931 kubelet[2727]: I0129 14:37:17.175636 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36917d7f-c4d0-4f96-8ea3-822395642bc8-var-lib-calico\") pod \"tigera-operator-7bc55997bb-bkf95\" (UID: \"36917d7f-c4d0-4f96-8ea3-822395642bc8\") " pod="tigera-operator/tigera-operator-7bc55997bb-bkf95" Jan 29 14:37:17.175931 kubelet[2727]: I0129 
14:37:17.175739 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjl9\" (UniqueName: \"kubernetes.io/projected/36917d7f-c4d0-4f96-8ea3-822395642bc8-kube-api-access-8pjl9\") pod \"tigera-operator-7bc55997bb-bkf95\" (UID: \"36917d7f-c4d0-4f96-8ea3-822395642bc8\") " pod="tigera-operator/tigera-operator-7bc55997bb-bkf95" Jan 29 14:37:17.215614 containerd[1501]: time="2025-01-29T14:37:17.215500559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g85rr,Uid:85ff70ff-c37f-4006-b085-4c8e5552ab32,Namespace:kube-system,Attempt:0,}" Jan 29 14:37:17.261049 containerd[1501]: time="2025-01-29T14:37:17.260882356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:17.262161 containerd[1501]: time="2025-01-29T14:37:17.262064615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:17.262161 containerd[1501]: time="2025-01-29T14:37:17.262100891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:17.262742 containerd[1501]: time="2025-01-29T14:37:17.262274508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:17.303127 systemd[1]: Started cri-containerd-76846fe33c016510bca889432ffac7c65b3f5c9b80f52952fe4adade5263d9af.scope - libcontainer container 76846fe33c016510bca889432ffac7c65b3f5c9b80f52952fe4adade5263d9af. 
Jan 29 14:37:17.326959 containerd[1501]: time="2025-01-29T14:37:17.326578522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-bkf95,Uid:36917d7f-c4d0-4f96-8ea3-822395642bc8,Namespace:tigera-operator,Attempt:0,}" Jan 29 14:37:17.351127 containerd[1501]: time="2025-01-29T14:37:17.351075670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g85rr,Uid:85ff70ff-c37f-4006-b085-4c8e5552ab32,Namespace:kube-system,Attempt:0,} returns sandbox id \"76846fe33c016510bca889432ffac7c65b3f5c9b80f52952fe4adade5263d9af\"" Jan 29 14:37:17.358906 containerd[1501]: time="2025-01-29T14:37:17.358740083Z" level=info msg="CreateContainer within sandbox \"76846fe33c016510bca889432ffac7c65b3f5c9b80f52952fe4adade5263d9af\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 14:37:17.379288 containerd[1501]: time="2025-01-29T14:37:17.379103287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:17.380541 containerd[1501]: time="2025-01-29T14:37:17.380385996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:17.380541 containerd[1501]: time="2025-01-29T14:37:17.380434515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:17.381013 containerd[1501]: time="2025-01-29T14:37:17.380785184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:17.390486 containerd[1501]: time="2025-01-29T14:37:17.389606777Z" level=info msg="CreateContainer within sandbox \"76846fe33c016510bca889432ffac7c65b3f5c9b80f52952fe4adade5263d9af\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a1f3d846f91a6af11e96eb10110284353668e1e74e56a8f275a6ba7823781e1\"" Jan 29 14:37:17.392650 containerd[1501]: time="2025-01-29T14:37:17.392618831Z" level=info msg="StartContainer for \"6a1f3d846f91a6af11e96eb10110284353668e1e74e56a8f275a6ba7823781e1\"" Jan 29 14:37:17.411098 systemd[1]: Started cri-containerd-8a5e4a61f97b28d7c57746aaebfdd624bec075a9a4d2fd8e74c8fbab74d9ecf2.scope - libcontainer container 8a5e4a61f97b28d7c57746aaebfdd624bec075a9a4d2fd8e74c8fbab74d9ecf2. Jan 29 14:37:17.457032 systemd[1]: Started cri-containerd-6a1f3d846f91a6af11e96eb10110284353668e1e74e56a8f275a6ba7823781e1.scope - libcontainer container 6a1f3d846f91a6af11e96eb10110284353668e1e74e56a8f275a6ba7823781e1. 
Jan 29 14:37:17.504144 containerd[1501]: time="2025-01-29T14:37:17.504093246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-bkf95,Uid:36917d7f-c4d0-4f96-8ea3-822395642bc8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8a5e4a61f97b28d7c57746aaebfdd624bec075a9a4d2fd8e74c8fbab74d9ecf2\"" Jan 29 14:37:17.506902 containerd[1501]: time="2025-01-29T14:37:17.506872144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 14:37:17.518061 containerd[1501]: time="2025-01-29T14:37:17.518021755Z" level=info msg="StartContainer for \"6a1f3d846f91a6af11e96eb10110284353668e1e74e56a8f275a6ba7823781e1\" returns successfully" Jan 29 14:37:18.128696 kubelet[2727]: I0129 14:37:18.128045 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g85rr" podStartSLOduration=2.127975278 podStartE2EDuration="2.127975278s" podCreationTimestamp="2025-01-29 14:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:37:18.127095606 +0000 UTC m=+15.319786881" watchObservedRunningTime="2025-01-29 14:37:18.127975278 +0000 UTC m=+15.320666548" Jan 29 14:37:19.332004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008337929.mount: Deactivated successfully. 
Jan 29 14:37:20.281028 containerd[1501]: time="2025-01-29T14:37:20.280472489Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:20.282628 containerd[1501]: time="2025-01-29T14:37:20.282558443Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 14:37:20.283839 containerd[1501]: time="2025-01-29T14:37:20.283764348Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:20.286486 containerd[1501]: time="2025-01-29T14:37:20.286424813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:20.287857 containerd[1501]: time="2025-01-29T14:37:20.287610998Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.779755835s" Jan 29 14:37:20.287857 containerd[1501]: time="2025-01-29T14:37:20.287666907Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 14:37:20.293022 containerd[1501]: time="2025-01-29T14:37:20.292945787Z" level=info msg="CreateContainer within sandbox \"8a5e4a61f97b28d7c57746aaebfdd624bec075a9a4d2fd8e74c8fbab74d9ecf2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 14:37:20.316534 containerd[1501]: time="2025-01-29T14:37:20.316439357Z" level=info msg="CreateContainer within sandbox 
\"8a5e4a61f97b28d7c57746aaebfdd624bec075a9a4d2fd8e74c8fbab74d9ecf2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54a993a17bd1ad71a6873ecab0c80873dc320da663144f293b0bbf2e9e7c566a\"" Jan 29 14:37:20.317738 containerd[1501]: time="2025-01-29T14:37:20.317635563Z" level=info msg="StartContainer for \"54a993a17bd1ad71a6873ecab0c80873dc320da663144f293b0bbf2e9e7c566a\"" Jan 29 14:37:20.375108 systemd[1]: Started cri-containerd-54a993a17bd1ad71a6873ecab0c80873dc320da663144f293b0bbf2e9e7c566a.scope - libcontainer container 54a993a17bd1ad71a6873ecab0c80873dc320da663144f293b0bbf2e9e7c566a. Jan 29 14:37:20.420744 containerd[1501]: time="2025-01-29T14:37:20.420262200Z" level=info msg="StartContainer for \"54a993a17bd1ad71a6873ecab0c80873dc320da663144f293b0bbf2e9e7c566a\" returns successfully" Jan 29 14:37:21.141155 kubelet[2727]: I0129 14:37:21.140000 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-bkf95" podStartSLOduration=2.356591615 podStartE2EDuration="5.13994561s" podCreationTimestamp="2025-01-29 14:37:16 +0000 UTC" firstStartedPulling="2025-01-29 14:37:17.505609538 +0000 UTC m=+14.698300801" lastFinishedPulling="2025-01-29 14:37:20.288963526 +0000 UTC m=+17.481654796" observedRunningTime="2025-01-29 14:37:21.139759946 +0000 UTC m=+18.332451228" watchObservedRunningTime="2025-01-29 14:37:21.13994561 +0000 UTC m=+18.332636873" Jan 29 14:37:23.792389 kubelet[2727]: I0129 14:37:23.792249 2727 topology_manager.go:215] "Topology Admit Handler" podUID="c990bf3d-6953-4265-9c69-aca2f2c4441a" podNamespace="calico-system" podName="calico-typha-6565994bdd-cj2v2" Jan 29 14:37:23.824402 systemd[1]: Created slice kubepods-besteffort-podc990bf3d_6953_4265_9c69_aca2f2c4441a.slice - libcontainer container kubepods-besteffort-podc990bf3d_6953_4265_9c69_aca2f2c4441a.slice. 
Jan 29 14:37:23.916892 kubelet[2727]: I0129 14:37:23.916827 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c990bf3d-6953-4265-9c69-aca2f2c4441a-tigera-ca-bundle\") pod \"calico-typha-6565994bdd-cj2v2\" (UID: \"c990bf3d-6953-4265-9c69-aca2f2c4441a\") " pod="calico-system/calico-typha-6565994bdd-cj2v2" Jan 29 14:37:23.917239 kubelet[2727]: I0129 14:37:23.917208 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c990bf3d-6953-4265-9c69-aca2f2c4441a-typha-certs\") pod \"calico-typha-6565994bdd-cj2v2\" (UID: \"c990bf3d-6953-4265-9c69-aca2f2c4441a\") " pod="calico-system/calico-typha-6565994bdd-cj2v2" Jan 29 14:37:23.917422 kubelet[2727]: I0129 14:37:23.917395 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz25j\" (UniqueName: \"kubernetes.io/projected/c990bf3d-6953-4265-9c69-aca2f2c4441a-kube-api-access-gz25j\") pod \"calico-typha-6565994bdd-cj2v2\" (UID: \"c990bf3d-6953-4265-9c69-aca2f2c4441a\") " pod="calico-system/calico-typha-6565994bdd-cj2v2" Jan 29 14:37:23.956995 kubelet[2727]: I0129 14:37:23.956925 2727 topology_manager.go:215] "Topology Admit Handler" podUID="bde8eccf-d1f1-447e-84fc-13f58369a03e" podNamespace="calico-system" podName="calico-node-csznj" Jan 29 14:37:23.969782 systemd[1]: Created slice kubepods-besteffort-podbde8eccf_d1f1_447e_84fc_13f58369a03e.slice - libcontainer container kubepods-besteffort-podbde8eccf_d1f1_447e_84fc_13f58369a03e.slice. 
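Each VerifyControllerAttachedVolume entry above records a volume UniqueName that appears to follow the layout `kubernetes.io/<plugin>/<pod-uid>-<volume-name>` — an assumption inferred from these log lines, not a documented contract. A sketch splitting the three calico-typha volumes back into backing plugin and volume name (the pod UID comes from the log itself):

```python
# Split kubelet volume UniqueNames (layout inferred from the log, see lead-in)
# into (plugin, volume-name) pairs for the calico-typha pod.
POD_UID = "c990bf3d-6953-4265-9c69-aca2f2c4441a"

def split_unique_name(unique: str, pod_uid: str):
    _, plugin, tail = unique.split("/", 2)   # e.g. kubernetes.io / secret / <uid>-typha-certs
    assert tail.startswith(pod_uid + "-")
    return plugin, tail[len(pod_uid) + 1:]

names = [
    "kubernetes.io/configmap/c990bf3d-6953-4265-9c69-aca2f2c4441a-tigera-ca-bundle",
    "kubernetes.io/secret/c990bf3d-6953-4265-9c69-aca2f2c4441a-typha-certs",
    "kubernetes.io/projected/c990bf3d-6953-4265-9c69-aca2f2c4441a-kube-api-access-gz25j",
]
parsed = [split_unique_name(n, POD_UID) for n in names]
assert parsed == [("configmap", "tigera-ca-bundle"),
                  ("secret", "typha-certs"),
                  ("projected", "kube-api-access-gz25j")]
```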
Jan 29 14:37:23.974044 kubelet[2727]: W0129 14:37:23.973994 2727 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:srv-rni4s.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-rni4s.gb1.brightbox.com' and this object Jan 29 14:37:23.974164 kubelet[2727]: E0129 14:37:23.974091 2727 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:srv-rni4s.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-rni4s.gb1.brightbox.com' and this object Jan 29 14:37:23.979820 kubelet[2727]: W0129 14:37:23.979763 2727 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:srv-rni4s.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-rni4s.gb1.brightbox.com' and this object Jan 29 14:37:23.980556 kubelet[2727]: E0129 14:37:23.980522 2727 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:srv-rni4s.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-rni4s.gb1.brightbox.com' and this object Jan 29 14:37:24.118719 kubelet[2727]: I0129 14:37:24.118666 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-log-dir\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " 
pod="calico-system/calico-node-csznj" Jan 29 14:37:24.118961 kubelet[2727]: I0129 14:37:24.118732 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9pf9\" (UniqueName: \"kubernetes.io/projected/bde8eccf-d1f1-447e-84fc-13f58369a03e-kube-api-access-c9pf9\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.118961 kubelet[2727]: I0129 14:37:24.118766 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-xtables-lock\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.118961 kubelet[2727]: I0129 14:37:24.118793 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-lib-calico\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.118961 kubelet[2727]: I0129 14:37:24.118848 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-net-dir\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.118961 kubelet[2727]: I0129 14:37:24.118890 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bde8eccf-d1f1-447e-84fc-13f58369a03e-tigera-ca-bundle\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 
14:37:24.119210 kubelet[2727]: I0129 14:37:24.118923 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-bin-dir\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.119210 kubelet[2727]: I0129 14:37:24.118949 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-flexvol-driver-host\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.119210 kubelet[2727]: I0129 14:37:24.118980 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-lib-modules\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.119210 kubelet[2727]: I0129 14:37:24.119007 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-run-calico\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.119210 kubelet[2727]: I0129 14:37:24.119039 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-policysync\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.121545 kubelet[2727]: I0129 14:37:24.119069 2727 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bde8eccf-d1f1-447e-84fc-13f58369a03e-node-certs\") pod \"calico-node-csznj\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " pod="calico-system/calico-node-csznj" Jan 29 14:37:24.143778 containerd[1501]: time="2025-01-29T14:37:24.143681380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6565994bdd-cj2v2,Uid:c990bf3d-6953-4265-9c69-aca2f2c4441a,Namespace:calico-system,Attempt:0,}" Jan 29 14:37:24.206016 containerd[1501]: time="2025-01-29T14:37:24.205365535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:24.208823 containerd[1501]: time="2025-01-29T14:37:24.206448154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:24.218828 containerd[1501]: time="2025-01-29T14:37:24.208532366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:24.218828 containerd[1501]: time="2025-01-29T14:37:24.211057656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:24.220477 kubelet[2727]: I0129 14:37:24.220049 2727 topology_manager.go:215] "Topology Admit Handler" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" podNamespace="calico-system" podName="csi-node-driver-zh8xr" Jan 29 14:37:24.220477 kubelet[2727]: E0129 14:37:24.220404 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:24.224821 kubelet[2727]: E0129 14:37:24.224284 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.224821 kubelet[2727]: W0129 14:37:24.224427 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.224821 kubelet[2727]: E0129 14:37:24.224540 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.229417 kubelet[2727]: E0129 14:37:24.229302 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.229417 kubelet[2727]: W0129 14:37:24.229330 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.229417 kubelet[2727]: E0129 14:37:24.229352 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.229785 kubelet[2727]: E0129 14:37:24.229708 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.229785 kubelet[2727]: W0129 14:37:24.229730 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.229785 kubelet[2727]: E0129 14:37:24.229769 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.231197 kubelet[2727]: E0129 14:37:24.231144 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.231197 kubelet[2727]: W0129 14:37:24.231190 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.231394 kubelet[2727]: E0129 14:37:24.231262 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.231636 kubelet[2727]: E0129 14:37:24.231612 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.231636 kubelet[2727]: W0129 14:37:24.231632 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.231818 kubelet[2727]: E0129 14:37:24.231774 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.233597 kubelet[2727]: E0129 14:37:24.233570 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.233597 kubelet[2727]: W0129 14:37:24.233593 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.233823 kubelet[2727]: E0129 14:37:24.233611 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.235637 kubelet[2727]: E0129 14:37:24.234545 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.235637 kubelet[2727]: W0129 14:37:24.234569 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.235637 kubelet[2727]: E0129 14:37:24.234588 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.236945 kubelet[2727]: E0129 14:37:24.236918 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.237879 kubelet[2727]: W0129 14:37:24.236941 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.237969 kubelet[2727]: E0129 14:37:24.237883 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.238536 kubelet[2727]: E0129 14:37:24.238291 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.238536 kubelet[2727]: W0129 14:37:24.238343 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.238536 kubelet[2727]: E0129 14:37:24.238363 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.272971 systemd[1]: Started cri-containerd-df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51.scope - libcontainer container df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51. Jan 29 14:37:24.283710 kubelet[2727]: E0129 14:37:24.283651 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.283887 kubelet[2727]: W0129 14:37:24.283842 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.283887 kubelet[2727]: E0129 14:37:24.283879 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.322240 kubelet[2727]: E0129 14:37:24.322199 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.322862 kubelet[2727]: W0129 14:37:24.322234 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.322958 kubelet[2727]: E0129 14:37:24.322880 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.322958 kubelet[2727]: I0129 14:37:24.322946 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmb5x\" (UniqueName: \"kubernetes.io/projected/9171cee1-001f-4815-a918-01b00e67d3d3-kube-api-access-qmb5x\") pod \"csi-node-driver-zh8xr\" (UID: \"9171cee1-001f-4815-a918-01b00e67d3d3\") " pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:24.326898 kubelet[2727]: E0129 14:37:24.326853 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.326898 kubelet[2727]: W0129 14:37:24.326884 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.327043 kubelet[2727]: E0129 14:37:24.326936 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.327043 kubelet[2727]: I0129 14:37:24.326966 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9171cee1-001f-4815-a918-01b00e67d3d3-varrun\") pod \"csi-node-driver-zh8xr\" (UID: \"9171cee1-001f-4815-a918-01b00e67d3d3\") " pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:24.327516 kubelet[2727]: E0129 14:37:24.327360 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.327516 kubelet[2727]: W0129 14:37:24.327382 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.327516 kubelet[2727]: E0129 14:37:24.327406 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.328230 kubelet[2727]: E0129 14:37:24.328123 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.328230 kubelet[2727]: W0129 14:37:24.328146 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.328230 kubelet[2727]: E0129 14:37:24.328181 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.329310 kubelet[2727]: E0129 14:37:24.329186 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.331883 kubelet[2727]: W0129 14:37:24.329209 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.331964 kubelet[2727]: E0129 14:37:24.331904 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.332843 kubelet[2727]: I0129 14:37:24.331938 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9171cee1-001f-4815-a918-01b00e67d3d3-registration-dir\") pod \"csi-node-driver-zh8xr\" (UID: \"9171cee1-001f-4815-a918-01b00e67d3d3\") " pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:24.333924 kubelet[2727]: E0129 14:37:24.333893 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.333924 kubelet[2727]: W0129 14:37:24.333921 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.334056 kubelet[2727]: E0129 14:37:24.333947 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.334294 kubelet[2727]: E0129 14:37:24.334257 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.334294 kubelet[2727]: W0129 14:37:24.334279 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.335960 kubelet[2727]: E0129 14:37:24.335930 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.337024 kubelet[2727]: E0129 14:37:24.336997 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.337024 kubelet[2727]: W0129 14:37:24.337020 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.337159 kubelet[2727]: E0129 14:37:24.337115 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.338618 kubelet[2727]: E0129 14:37:24.338593 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.338618 kubelet[2727]: W0129 14:37:24.338616 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.338759 kubelet[2727]: E0129 14:37:24.338642 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.341070 kubelet[2727]: E0129 14:37:24.341040 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.341070 kubelet[2727]: W0129 14:37:24.341064 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.341935 kubelet[2727]: E0129 14:37:24.341903 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.342213 kubelet[2727]: E0129 14:37:24.342181 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.342213 kubelet[2727]: W0129 14:37:24.342203 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.342333 kubelet[2727]: E0129 14:37:24.342220 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.342333 kubelet[2727]: I0129 14:37:24.342258 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9171cee1-001f-4815-a918-01b00e67d3d3-kubelet-dir\") pod \"csi-node-driver-zh8xr\" (UID: \"9171cee1-001f-4815-a918-01b00e67d3d3\") " pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:24.342769 kubelet[2727]: E0129 14:37:24.342726 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.342769 kubelet[2727]: W0129 14:37:24.342749 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.342961 kubelet[2727]: E0129 14:37:24.342774 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.342961 kubelet[2727]: I0129 14:37:24.342821 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9171cee1-001f-4815-a918-01b00e67d3d3-socket-dir\") pod \"csi-node-driver-zh8xr\" (UID: \"9171cee1-001f-4815-a918-01b00e67d3d3\") " pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:24.345828 kubelet[2727]: E0129 14:37:24.344130 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.345828 kubelet[2727]: W0129 14:37:24.344154 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.345828 kubelet[2727]: E0129 14:37:24.344252 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.345828 kubelet[2727]: E0129 14:37:24.344943 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.345828 kubelet[2727]: W0129 14:37:24.344958 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.345828 kubelet[2727]: E0129 14:37:24.344993 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.347326 kubelet[2727]: E0129 14:37:24.346940 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.347326 kubelet[2727]: W0129 14:37:24.346964 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.347326 kubelet[2727]: E0129 14:37:24.346983 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.347326 kubelet[2727]: E0129 14:37:24.347303 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.347326 kubelet[2727]: W0129 14:37:24.347319 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.347608 kubelet[2727]: E0129 14:37:24.347338 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.421345 containerd[1501]: time="2025-01-29T14:37:24.421175427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6565994bdd-cj2v2,Uid:c990bf3d-6953-4265-9c69-aca2f2c4441a,Namespace:calico-system,Attempt:0,} returns sandbox id \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\"" Jan 29 14:37:24.426828 containerd[1501]: time="2025-01-29T14:37:24.426453030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 14:37:24.443868 kubelet[2727]: E0129 14:37:24.443829 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.443868 kubelet[2727]: W0129 14:37:24.443864 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.444064 kubelet[2727]: E0129 14:37:24.443894 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.444309 kubelet[2727]: E0129 14:37:24.444286 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.444377 kubelet[2727]: W0129 14:37:24.444330 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.444377 kubelet[2727]: E0129 14:37:24.444362 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.444700 kubelet[2727]: E0129 14:37:24.444657 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.444700 kubelet[2727]: W0129 14:37:24.444698 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.444838 kubelet[2727]: E0129 14:37:24.444741 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.445135 kubelet[2727]: E0129 14:37:24.445111 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.445201 kubelet[2727]: W0129 14:37:24.445148 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.445201 kubelet[2727]: E0129 14:37:24.445179 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.445517 kubelet[2727]: E0129 14:37:24.445496 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.445580 kubelet[2727]: W0129 14:37:24.445533 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.445580 kubelet[2727]: E0129 14:37:24.445563 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.445941 kubelet[2727]: E0129 14:37:24.445918 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.445941 kubelet[2727]: W0129 14:37:24.445939 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.446151 kubelet[2727]: E0129 14:37:24.446065 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.446602 kubelet[2727]: E0129 14:37:24.446579 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.446708 kubelet[2727]: W0129 14:37:24.446603 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.446908 kubelet[2727]: E0129 14:37:24.446864 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.448590 kubelet[2727]: E0129 14:37:24.448565 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.448590 kubelet[2727]: W0129 14:37:24.448587 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.448744 kubelet[2727]: E0129 14:37:24.448619 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.449026 kubelet[2727]: E0129 14:37:24.449005 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.449100 kubelet[2727]: W0129 14:37:24.449047 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.449203 kubelet[2727]: E0129 14:37:24.449072 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.450842 kubelet[2727]: E0129 14:37:24.449862 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.450842 kubelet[2727]: W0129 14:37:24.449884 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.450842 kubelet[2727]: E0129 14:37:24.449911 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.450842 kubelet[2727]: E0129 14:37:24.450702 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.452851 kubelet[2727]: W0129 14:37:24.450719 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.452945 kubelet[2727]: E0129 14:37:24.452917 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.453258 kubelet[2727]: E0129 14:37:24.453226 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.453258 kubelet[2727]: W0129 14:37:24.453256 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.453378 kubelet[2727]: E0129 14:37:24.453356 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.453667 kubelet[2727]: E0129 14:37:24.453645 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.453667 kubelet[2727]: W0129 14:37:24.453665 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.453828 kubelet[2727]: E0129 14:37:24.453787 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.454095 kubelet[2727]: E0129 14:37:24.454072 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.454167 kubelet[2727]: W0129 14:37:24.454100 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.454167 kubelet[2727]: E0129 14:37:24.454124 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.456256 kubelet[2727]: E0129 14:37:24.456232 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.456675 kubelet[2727]: W0129 14:37:24.456649 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.456936 kubelet[2727]: E0129 14:37:24.456831 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.457176 kubelet[2727]: E0129 14:37:24.457155 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.457246 kubelet[2727]: W0129 14:37:24.457176 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.457246 kubelet[2727]: E0129 14:37:24.457234 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.457850 kubelet[2727]: E0129 14:37:24.457826 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.457850 kubelet[2727]: W0129 14:37:24.457847 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.458000 kubelet[2727]: E0129 14:37:24.457923 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.458305 kubelet[2727]: E0129 14:37:24.458252 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.458305 kubelet[2727]: W0129 14:37:24.458266 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.458431 kubelet[2727]: E0129 14:37:24.458408 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.458693 kubelet[2727]: E0129 14:37:24.458614 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.458693 kubelet[2727]: W0129 14:37:24.458634 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.458917 kubelet[2727]: E0129 14:37:24.458846 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.459181 kubelet[2727]: E0129 14:37:24.459012 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.459181 kubelet[2727]: W0129 14:37:24.459049 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.459181 kubelet[2727]: E0129 14:37:24.459080 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.459397 kubelet[2727]: E0129 14:37:24.459381 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.459397 kubelet[2727]: W0129 14:37:24.459395 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.459558 kubelet[2727]: E0129 14:37:24.459440 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.460831 kubelet[2727]: E0129 14:37:24.459765 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.460831 kubelet[2727]: W0129 14:37:24.459850 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.460831 kubelet[2727]: E0129 14:37:24.459971 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.460831 kubelet[2727]: E0129 14:37:24.460466 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.460831 kubelet[2727]: W0129 14:37:24.460481 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.460831 kubelet[2727]: E0129 14:37:24.460611 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.463135 kubelet[2727]: E0129 14:37:24.463111 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.463135 kubelet[2727]: W0129 14:37:24.463133 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.463880 kubelet[2727]: E0129 14:37:24.463183 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.463880 kubelet[2727]: E0129 14:37:24.463557 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.463880 kubelet[2727]: W0129 14:37:24.463571 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.463880 kubelet[2727]: E0129 14:37:24.463734 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.464432 kubelet[2727]: E0129 14:37:24.464031 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.464432 kubelet[2727]: W0129 14:37:24.464045 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.464432 kubelet[2727]: E0129 14:37:24.464083 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.465944 kubelet[2727]: E0129 14:37:24.465922 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.465944 kubelet[2727]: W0129 14:37:24.465942 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.466085 kubelet[2727]: E0129 14:37:24.465959 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.560559 kubelet[2727]: E0129 14:37:24.559506 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.560559 kubelet[2727]: W0129 14:37:24.559538 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.560559 kubelet[2727]: E0129 14:37:24.559569 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.662136 kubelet[2727]: E0129 14:37:24.661909 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.662136 kubelet[2727]: W0129 14:37:24.661963 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.662136 kubelet[2727]: E0129 14:37:24.662044 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.764391 kubelet[2727]: E0129 14:37:24.764104 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.764391 kubelet[2727]: W0129 14:37:24.764143 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.764391 kubelet[2727]: E0129 14:37:24.764173 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:24.866528 kubelet[2727]: E0129 14:37:24.866127 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.866528 kubelet[2727]: W0129 14:37:24.866165 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.866528 kubelet[2727]: E0129 14:37:24.866197 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:24.967369 kubelet[2727]: E0129 14:37:24.967029 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:24.967369 kubelet[2727]: W0129 14:37:24.967067 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:24.967369 kubelet[2727]: E0129 14:37:24.967112 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:25.027384 kubelet[2727]: E0129 14:37:25.027199 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:25.027384 kubelet[2727]: W0129 14:37:25.027232 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:25.027733 kubelet[2727]: E0129 14:37:25.027677 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:25.178837 containerd[1501]: time="2025-01-29T14:37:25.178692317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-csznj,Uid:bde8eccf-d1f1-447e-84fc-13f58369a03e,Namespace:calico-system,Attempt:0,}" Jan 29 14:37:25.230312 containerd[1501]: time="2025-01-29T14:37:25.230080206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:25.230312 containerd[1501]: time="2025-01-29T14:37:25.230242842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:25.230312 containerd[1501]: time="2025-01-29T14:37:25.230269072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:25.231178 containerd[1501]: time="2025-01-29T14:37:25.230469608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:25.270965 systemd[1]: run-containerd-runc-k8s.io-47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f-runc.yvHOiv.mount: Deactivated successfully. 
Jan 29 14:37:25.282148 systemd[1]: Started cri-containerd-47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f.scope - libcontainer container 47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f. Jan 29 14:37:25.332096 containerd[1501]: time="2025-01-29T14:37:25.331891757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-csznj,Uid:bde8eccf-d1f1-447e-84fc-13f58369a03e,Namespace:calico-system,Attempt:0,} returns sandbox id \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\"" Jan 29 14:37:26.043781 kubelet[2727]: E0129 14:37:26.042560 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:27.480932 containerd[1501]: time="2025-01-29T14:37:27.480852685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:27.482425 containerd[1501]: time="2025-01-29T14:37:27.482364578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 14:37:27.483354 containerd[1501]: time="2025-01-29T14:37:27.483281081Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:27.486219 containerd[1501]: time="2025-01-29T14:37:27.486157572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:27.487616 containerd[1501]: time="2025-01-29T14:37:27.487416360Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.060915906s" Jan 29 14:37:27.487616 containerd[1501]: time="2025-01-29T14:37:27.487475675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 14:37:27.493163 containerd[1501]: time="2025-01-29T14:37:27.492649471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 14:37:27.522596 containerd[1501]: time="2025-01-29T14:37:27.522541020Z" level=info msg="CreateContainer within sandbox \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 14:37:27.555229 containerd[1501]: time="2025-01-29T14:37:27.555151520Z" level=info msg="CreateContainer within sandbox \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\"" Jan 29 14:37:27.556295 containerd[1501]: time="2025-01-29T14:37:27.556223221Z" level=info msg="StartContainer for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\"" Jan 29 14:37:27.672489 systemd[1]: Started cri-containerd-1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da.scope - libcontainer container 1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da. 
Jan 29 14:37:27.756876 containerd[1501]: time="2025-01-29T14:37:27.754383953Z" level=info msg="StartContainer for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" returns successfully" Jan 29 14:37:28.040862 kubelet[2727]: E0129 14:37:28.040591 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:28.166669 kubelet[2727]: E0129 14:37:28.166605 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:28.166669 kubelet[2727]: W0129 14:37:28.166656 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:28.167072 kubelet[2727]: E0129 14:37:28.166707 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:28.167072 kubelet[2727]: E0129 14:37:28.167047 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:28.167072 kubelet[2727]: W0129 14:37:28.167063 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:28.167939 kubelet[2727]: E0129 14:37:28.167098 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:28.167939 kubelet[2727]: E0129 14:37:28.167410 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:28.167939 kubelet[2727]: W0129 14:37:28.167425 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:28.167939 kubelet[2727]: E0129 14:37:28.167441 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:28.167939 kubelet[2727]: E0129 14:37:28.167705 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:28.167939 kubelet[2727]: W0129 14:37:28.167720 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:28.167939 kubelet[2727]: E0129 14:37:28.167735 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 14:37:28.168964 kubelet[2727]: E0129 14:37:28.168056 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:37:28.168964 kubelet[2727]: W0129 14:37:28.168071 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:37:28.168964 kubelet[2727]: E0129 14:37:28.168086 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 14:37:28.189825 kubelet[2727]: I0129 14:37:28.188564 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6565994bdd-cj2v2" podStartSLOduration=2.123010521 podStartE2EDuration="5.188199154s" podCreationTimestamp="2025-01-29 14:37:23 +0000 UTC" firstStartedPulling="2025-01-29 14:37:24.426132746 +0000 UTC m=+21.618824002" lastFinishedPulling="2025-01-29 14:37:27.491321358 +0000 UTC m=+24.684012635" observedRunningTime="2025-01-29 14:37:28.187451838 +0000 UTC m=+25.380143117" watchObservedRunningTime="2025-01-29 14:37:28.188199154 +0000 UTC m=+25.380890422"
Jan 29 14:37:28.195534 kubelet[2727]: E0129 14:37:28.195442 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:37:28.195534 kubelet[2727]: W0129 14:37:28.195471 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:37:28.195534 kubelet[2727]: E0129 14:37:28.195498 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 14:37:29.168195 kubelet[2727]: I0129 14:37:29.167402 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 14:37:29.180231 kubelet[2727]: E0129 14:37:29.179614 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:37:29.180722 kubelet[2727]: W0129 14:37:29.180510 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:37:29.180722 kubelet[2727]: E0129 14:37:29.180666 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.182911 kubelet[2727]: E0129 14:37:29.182728 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.182911 kubelet[2727]: W0129 14:37:29.182760 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.182911 kubelet[2727]: E0129 14:37:29.182779 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.184474 kubelet[2727]: E0129 14:37:29.184252 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.184474 kubelet[2727]: W0129 14:37:29.184273 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.185025 kubelet[2727]: E0129 14:37:29.184326 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.185622 kubelet[2727]: E0129 14:37:29.185472 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.185622 kubelet[2727]: W0129 14:37:29.185511 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.185622 kubelet[2727]: E0129 14:37:29.185530 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.186786 kubelet[2727]: E0129 14:37:29.186361 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.186786 kubelet[2727]: W0129 14:37:29.186383 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.186786 kubelet[2727]: E0129 14:37:29.186401 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.187147 kubelet[2727]: E0129 14:37:29.187088 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.187147 kubelet[2727]: W0129 14:37:29.187103 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.187147 kubelet[2727]: E0129 14:37:29.187119 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.189858 kubelet[2727]: E0129 14:37:29.187822 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.189858 kubelet[2727]: W0129 14:37:29.187844 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.189858 kubelet[2727]: E0129 14:37:29.187860 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.189858 kubelet[2727]: E0129 14:37:29.188464 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.189858 kubelet[2727]: W0129 14:37:29.188480 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.189858 kubelet[2727]: E0129 14:37:29.188496 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.189858 kubelet[2727]: E0129 14:37:29.189297 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.189858 kubelet[2727]: W0129 14:37:29.189313 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.189858 kubelet[2727]: E0129 14:37:29.189329 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.190671 kubelet[2727]: E0129 14:37:29.189954 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.190671 kubelet[2727]: W0129 14:37:29.189969 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.190671 kubelet[2727]: E0129 14:37:29.189985 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.190671 kubelet[2727]: E0129 14:37:29.190542 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.190671 kubelet[2727]: W0129 14:37:29.190557 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.190671 kubelet[2727]: E0129 14:37:29.190572 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.192484 kubelet[2727]: E0129 14:37:29.191554 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.192484 kubelet[2727]: W0129 14:37:29.191568 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.192484 kubelet[2727]: E0129 14:37:29.191584 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.192484 kubelet[2727]: E0129 14:37:29.191863 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.192484 kubelet[2727]: W0129 14:37:29.191877 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.192484 kubelet[2727]: E0129 14:37:29.191892 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.192484 kubelet[2727]: E0129 14:37:29.192135 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.192484 kubelet[2727]: W0129 14:37:29.192149 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.192484 kubelet[2727]: E0129 14:37:29.192164 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.192948 kubelet[2727]: E0129 14:37:29.192536 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.192948 kubelet[2727]: W0129 14:37:29.192564 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.192948 kubelet[2727]: E0129 14:37:29.192580 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.206080 kubelet[2727]: E0129 14:37:29.206047 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.206080 kubelet[2727]: W0129 14:37:29.206077 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.206292 kubelet[2727]: E0129 14:37:29.206098 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.206698 kubelet[2727]: E0129 14:37:29.206648 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.206698 kubelet[2727]: W0129 14:37:29.206670 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.207295 kubelet[2727]: E0129 14:37:29.207187 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.207396 kubelet[2727]: E0129 14:37:29.207339 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.207396 kubelet[2727]: W0129 14:37:29.207370 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.207506 kubelet[2727]: E0129 14:37:29.207395 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:37:29.208037 kubelet[2727]: E0129 14:37:29.208004 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:37:29.208037 kubelet[2727]: W0129 14:37:29.208025 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:37:29.208373 kubelet[2727]: E0129 14:37:29.208059 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:37:29.286486 containerd[1501]: time="2025-01-29T14:37:29.285893972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:29.287740 containerd[1501]: time="2025-01-29T14:37:29.287097419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 14:37:29.288666 containerd[1501]: time="2025-01-29T14:37:29.288578682Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:29.292855 containerd[1501]: time="2025-01-29T14:37:29.292082340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:29.293393 containerd[1501]: time="2025-01-29T14:37:29.293341638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.800596634s" Jan 29 14:37:29.293549 containerd[1501]: time="2025-01-29T14:37:29.293519584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 14:37:29.298608 containerd[1501]: time="2025-01-29T14:37:29.298562762Z" level=info msg="CreateContainer within sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 14:37:29.318067 containerd[1501]: time="2025-01-29T14:37:29.317997939Z" level=info msg="CreateContainer within sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f\"" Jan 29 14:37:29.319051 containerd[1501]: time="2025-01-29T14:37:29.319019884Z" level=info msg="StartContainer for \"a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f\"" Jan 29 14:37:29.373053 systemd[1]: Started cri-containerd-a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f.scope - libcontainer container a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f. Jan 29 14:37:29.425880 containerd[1501]: time="2025-01-29T14:37:29.424692357Z" level=info msg="StartContainer for \"a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f\" returns successfully" Jan 29 14:37:29.478648 systemd[1]: cri-containerd-a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f.scope: Deactivated successfully. Jan 29 14:37:29.533597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f-rootfs.mount: Deactivated successfully. 
Jan 29 14:37:29.553683 containerd[1501]: time="2025-01-29T14:37:29.535704199Z" level=info msg="shim disconnected" id=a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f namespace=k8s.io Jan 29 14:37:29.553683 containerd[1501]: time="2025-01-29T14:37:29.553635172Z" level=warning msg="cleaning up after shim disconnected" id=a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f namespace=k8s.io Jan 29 14:37:29.553683 containerd[1501]: time="2025-01-29T14:37:29.553670081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:37:30.041335 kubelet[2727]: E0129 14:37:30.041252 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:30.174683 containerd[1501]: time="2025-01-29T14:37:30.174311159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 14:37:32.041362 kubelet[2727]: E0129 14:37:32.041157 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:34.042470 kubelet[2727]: E0129 14:37:34.041572 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:36.041278 kubelet[2727]: E0129 14:37:36.040628 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:36.340186 containerd[1501]: time="2025-01-29T14:37:36.339852670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:36.341661 containerd[1501]: time="2025-01-29T14:37:36.341518977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 14:37:36.342614 containerd[1501]: time="2025-01-29T14:37:36.342400660Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:36.346309 containerd[1501]: time="2025-01-29T14:37:36.346179822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:36.347528 containerd[1501]: time="2025-01-29T14:37:36.347236999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.172842916s" Jan 29 14:37:36.347528 containerd[1501]: time="2025-01-29T14:37:36.347300819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 14:37:36.354604 containerd[1501]: time="2025-01-29T14:37:36.354362530Z" level=info msg="CreateContainer within sandbox 
\"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 14:37:36.381615 containerd[1501]: time="2025-01-29T14:37:36.381559218Z" level=info msg="CreateContainer within sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d\"" Jan 29 14:37:36.382865 containerd[1501]: time="2025-01-29T14:37:36.382789950Z" level=info msg="StartContainer for \"a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d\"" Jan 29 14:37:36.480152 systemd[1]: Started cri-containerd-a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d.scope - libcontainer container a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d. Jan 29 14:37:36.537308 containerd[1501]: time="2025-01-29T14:37:36.537226151Z" level=info msg="StartContainer for \"a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d\" returns successfully" Jan 29 14:37:37.413193 systemd[1]: cri-containerd-a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d.scope: Deactivated successfully. Jan 29 14:37:37.462413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d-rootfs.mount: Deactivated successfully. 
Jan 29 14:37:37.521016 kubelet[2727]: I0129 14:37:37.507905 2727 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 14:37:37.526051 containerd[1501]: time="2025-01-29T14:37:37.525911650Z" level=info msg="shim disconnected" id=a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d namespace=k8s.io Jan 29 14:37:37.528691 containerd[1501]: time="2025-01-29T14:37:37.526054430Z" level=warning msg="cleaning up after shim disconnected" id=a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d namespace=k8s.io Jan 29 14:37:37.528691 containerd[1501]: time="2025-01-29T14:37:37.526075167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:37:37.556992 kubelet[2727]: I0129 14:37:37.556548 2727 topology_manager.go:215] "Topology Admit Handler" podUID="114e2b6a-4dec-46af-b960-cb1b36f44242" podNamespace="kube-system" podName="coredns-7db6d8ff4d-snc7g" Jan 29 14:37:37.566720 kubelet[2727]: I0129 14:37:37.566670 2727 topology_manager.go:215] "Topology Admit Handler" podUID="9d07ade6-7ccb-46c5-be71-11453b4fb53f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7nbqm" Jan 29 14:37:37.568545 kubelet[2727]: I0129 14:37:37.566843 2727 topology_manager.go:215] "Topology Admit Handler" podUID="8f491968-0d35-4eb0-81d6-df6823ca943a" podNamespace="calico-apiserver" podName="calico-apiserver-b98d747cb-hclzq" Jan 29 14:37:37.568622 containerd[1501]: time="2025-01-29T14:37:37.566978331Z" level=warning msg="cleanup warnings time=\"2025-01-29T14:37:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 14:37:37.577973 kubelet[2727]: I0129 14:37:37.577055 2727 topology_manager.go:215] "Topology Admit Handler" podUID="5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" podNamespace="calico-system" podName="calico-kube-controllers-77d6948c77-z6748" Jan 29 14:37:37.577973 kubelet[2727]: I0129 14:37:37.577353 
2727 topology_manager.go:215] "Topology Admit Handler" podUID="41990643-0c0f-44e1-bc9b-12b09a813a6e" podNamespace="calico-apiserver" podName="calico-apiserver-b98d747cb-8t2mn" Jan 29 14:37:37.625875 systemd[1]: Created slice kubepods-besteffort-pod8f491968_0d35_4eb0_81d6_df6823ca943a.slice - libcontainer container kubepods-besteffort-pod8f491968_0d35_4eb0_81d6_df6823ca943a.slice. Jan 29 14:37:37.637535 systemd[1]: Created slice kubepods-burstable-pod114e2b6a_4dec_46af_b960_cb1b36f44242.slice - libcontainer container kubepods-burstable-pod114e2b6a_4dec_46af_b960_cb1b36f44242.slice. Jan 29 14:37:37.653327 systemd[1]: Created slice kubepods-burstable-pod9d07ade6_7ccb_46c5_be71_11453b4fb53f.slice - libcontainer container kubepods-burstable-pod9d07ade6_7ccb_46c5_be71_11453b4fb53f.slice. Jan 29 14:37:37.668326 systemd[1]: Created slice kubepods-besteffort-pod41990643_0c0f_44e1_bc9b_12b09a813a6e.slice - libcontainer container kubepods-besteffort-pod41990643_0c0f_44e1_bc9b_12b09a813a6e.slice. Jan 29 14:37:37.672047 kubelet[2727]: I0129 14:37:37.671541 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f491968-0d35-4eb0-81d6-df6823ca943a-calico-apiserver-certs\") pod \"calico-apiserver-b98d747cb-hclzq\" (UID: \"8f491968-0d35-4eb0-81d6-df6823ca943a\") " pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" Jan 29 14:37:37.672047 kubelet[2727]: I0129 14:37:37.671598 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d07ade6-7ccb-46c5-be71-11453b4fb53f-config-volume\") pod \"coredns-7db6d8ff4d-7nbqm\" (UID: \"9d07ade6-7ccb-46c5-be71-11453b4fb53f\") " pod="kube-system/coredns-7db6d8ff4d-7nbqm" Jan 29 14:37:37.672047 kubelet[2727]: I0129 14:37:37.671630 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41990643-0c0f-44e1-bc9b-12b09a813a6e-calico-apiserver-certs\") pod \"calico-apiserver-b98d747cb-8t2mn\" (UID: \"41990643-0c0f-44e1-bc9b-12b09a813a6e\") " pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" Jan 29 14:37:37.672047 kubelet[2727]: I0129 14:37:37.671660 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlff\" (UniqueName: \"kubernetes.io/projected/114e2b6a-4dec-46af-b960-cb1b36f44242-kube-api-access-pxlff\") pod \"coredns-7db6d8ff4d-snc7g\" (UID: \"114e2b6a-4dec-46af-b960-cb1b36f44242\") " pod="kube-system/coredns-7db6d8ff4d-snc7g" Jan 29 14:37:37.672047 kubelet[2727]: I0129 14:37:37.671690 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-tigera-ca-bundle\") pod \"calico-kube-controllers-77d6948c77-z6748\" (UID: \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\") " pod="calico-system/calico-kube-controllers-77d6948c77-z6748" Jan 29 14:37:37.672726 kubelet[2727]: I0129 14:37:37.672225 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8q7c\" (UniqueName: \"kubernetes.io/projected/41990643-0c0f-44e1-bc9b-12b09a813a6e-kube-api-access-m8q7c\") pod \"calico-apiserver-b98d747cb-8t2mn\" (UID: \"41990643-0c0f-44e1-bc9b-12b09a813a6e\") " pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" Jan 29 14:37:37.672726 kubelet[2727]: I0129 14:37:37.672400 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrws\" (UniqueName: \"kubernetes.io/projected/9d07ade6-7ccb-46c5-be71-11453b4fb53f-kube-api-access-nmrws\") pod \"coredns-7db6d8ff4d-7nbqm\" (UID: \"9d07ade6-7ccb-46c5-be71-11453b4fb53f\") " pod="kube-system/coredns-7db6d8ff4d-7nbqm" Jan 29 14:37:37.672726 
kubelet[2727]: I0129 14:37:37.672482 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ktx\" (UniqueName: \"kubernetes.io/projected/8f491968-0d35-4eb0-81d6-df6823ca943a-kube-api-access-44ktx\") pod \"calico-apiserver-b98d747cb-hclzq\" (UID: \"8f491968-0d35-4eb0-81d6-df6823ca943a\") " pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" Jan 29 14:37:37.672726 kubelet[2727]: I0129 14:37:37.672519 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/114e2b6a-4dec-46af-b960-cb1b36f44242-config-volume\") pod \"coredns-7db6d8ff4d-snc7g\" (UID: \"114e2b6a-4dec-46af-b960-cb1b36f44242\") " pod="kube-system/coredns-7db6d8ff4d-snc7g" Jan 29 14:37:37.672726 kubelet[2727]: I0129 14:37:37.672692 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkswg\" (UniqueName: \"kubernetes.io/projected/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-kube-api-access-xkswg\") pod \"calico-kube-controllers-77d6948c77-z6748\" (UID: \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\") " pod="calico-system/calico-kube-controllers-77d6948c77-z6748" Jan 29 14:37:37.680139 systemd[1]: Created slice kubepods-besteffort-pod5ff99ef3_cf8a_4707_99e1_ba45cf048b9d.slice - libcontainer container kubepods-besteffort-pod5ff99ef3_cf8a_4707_99e1_ba45cf048b9d.slice. 
Jan 29 14:37:37.936621 containerd[1501]: time="2025-01-29T14:37:37.935791394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-hclzq,Uid:8f491968-0d35-4eb0-81d6-df6823ca943a,Namespace:calico-apiserver,Attempt:0,}" Jan 29 14:37:37.948256 containerd[1501]: time="2025-01-29T14:37:37.948198640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-snc7g,Uid:114e2b6a-4dec-46af-b960-cb1b36f44242,Namespace:kube-system,Attempt:0,}" Jan 29 14:37:37.960713 containerd[1501]: time="2025-01-29T14:37:37.960357376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7nbqm,Uid:9d07ade6-7ccb-46c5-be71-11453b4fb53f,Namespace:kube-system,Attempt:0,}" Jan 29 14:37:37.981862 containerd[1501]: time="2025-01-29T14:37:37.981770600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-8t2mn,Uid:41990643-0c0f-44e1-bc9b-12b09a813a6e,Namespace:calico-apiserver,Attempt:0,}" Jan 29 14:37:37.991211 containerd[1501]: time="2025-01-29T14:37:37.991151773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d6948c77-z6748,Uid:5ff99ef3-cf8a-4707-99e1-ba45cf048b9d,Namespace:calico-system,Attempt:0,}" Jan 29 14:37:38.053447 systemd[1]: Created slice kubepods-besteffort-pod9171cee1_001f_4815_a918_01b00e67d3d3.slice - libcontainer container kubepods-besteffort-pod9171cee1_001f_4815_a918_01b00e67d3d3.slice. 
Jan 29 14:37:38.060342 containerd[1501]: time="2025-01-29T14:37:38.060294436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh8xr,Uid:9171cee1-001f-4815-a918-01b00e67d3d3,Namespace:calico-system,Attempt:0,}" Jan 29 14:37:38.262882 containerd[1501]: time="2025-01-29T14:37:38.261945619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 14:37:38.370775 containerd[1501]: time="2025-01-29T14:37:38.370681931Z" level=error msg="Failed to destroy network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.376142 containerd[1501]: time="2025-01-29T14:37:38.375943921Z" level=error msg="encountered an error cleaning up failed sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.376142 containerd[1501]: time="2025-01-29T14:37:38.376058515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh8xr,Uid:9171cee1-001f-4815-a918-01b00e67d3d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.389837 kubelet[2727]: E0129 14:37:38.376775 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.389837 kubelet[2727]: E0129 14:37:38.389683 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:38.389837 kubelet[2727]: E0129 14:37:38.389739 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zh8xr" Jan 29 14:37:38.390605 kubelet[2727]: E0129 14:37:38.390265 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zh8xr_calico-system(9171cee1-001f-4815-a918-01b00e67d3d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zh8xr_calico-system(9171cee1-001f-4815-a918-01b00e67d3d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zh8xr" 
podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 14:37:38.396479 containerd[1501]: time="2025-01-29T14:37:38.396007711Z" level=error msg="Failed to destroy network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.397076 containerd[1501]: time="2025-01-29T14:37:38.397037040Z" level=error msg="encountered an error cleaning up failed sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.399933 containerd[1501]: time="2025-01-29T14:37:38.399886809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d6948c77-z6748,Uid:5ff99ef3-cf8a-4707-99e1-ba45cf048b9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.400742 kubelet[2727]: E0129 14:37:38.400312 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.400742 kubelet[2727]: E0129 14:37:38.400377 2727 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77d6948c77-z6748" Jan 29 14:37:38.400742 kubelet[2727]: E0129 14:37:38.400409 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77d6948c77-z6748" Jan 29 14:37:38.400978 kubelet[2727]: E0129 14:37:38.400490 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77d6948c77-z6748_calico-system(5ff99ef3-cf8a-4707-99e1-ba45cf048b9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77d6948c77-z6748_calico-system(5ff99ef3-cf8a-4707-99e1-ba45cf048b9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77d6948c77-z6748" podUID="5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" Jan 29 14:37:38.434459 containerd[1501]: time="2025-01-29T14:37:38.433595892Z" level=error msg="Failed to destroy network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.435552 containerd[1501]: time="2025-01-29T14:37:38.435369892Z" level=error msg="encountered an error cleaning up failed sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.435746 containerd[1501]: time="2025-01-29T14:37:38.435708275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-snc7g,Uid:114e2b6a-4dec-46af-b960-cb1b36f44242,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.436965 containerd[1501]: time="2025-01-29T14:37:38.435996814Z" level=error msg="Failed to destroy network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.437407 containerd[1501]: time="2025-01-29T14:37:38.437370163Z" level=error msg="encountered an error cleaning up failed sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
29 14:37:38.437941 kubelet[2727]: E0129 14:37:38.437870 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.438141 kubelet[2727]: E0129 14:37:38.438102 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-snc7g" Jan 29 14:37:38.438291 kubelet[2727]: E0129 14:37:38.438255 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-snc7g" Jan 29 14:37:38.438489 kubelet[2727]: E0129 14:37:38.438447 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-snc7g_kube-system(114e2b6a-4dec-46af-b960-cb1b36f44242)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-snc7g_kube-system(114e2b6a-4dec-46af-b960-cb1b36f44242)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-snc7g" podUID="114e2b6a-4dec-46af-b960-cb1b36f44242" Jan 29 14:37:38.440856 containerd[1501]: time="2025-01-29T14:37:38.440784341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-8t2mn,Uid:41990643-0c0f-44e1-bc9b-12b09a813a6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.444284 kubelet[2727]: E0129 14:37:38.441170 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.444284 kubelet[2727]: E0129 14:37:38.441219 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" Jan 29 14:37:38.444284 kubelet[2727]: E0129 14:37:38.441245 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" Jan 29 14:37:38.444576 kubelet[2727]: E0129 14:37:38.441286 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b98d747cb-8t2mn_calico-apiserver(41990643-0c0f-44e1-bc9b-12b09a813a6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b98d747cb-8t2mn_calico-apiserver(41990643-0c0f-44e1-bc9b-12b09a813a6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" podUID="41990643-0c0f-44e1-bc9b-12b09a813a6e" Jan 29 14:37:38.446095 containerd[1501]: time="2025-01-29T14:37:38.446012572Z" level=error msg="Failed to destroy network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.447043 containerd[1501]: time="2025-01-29T14:37:38.446923072Z" level=error msg="encountered an error cleaning up failed sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.447043 
containerd[1501]: time="2025-01-29T14:37:38.446998883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7nbqm,Uid:9d07ade6-7ccb-46c5-be71-11453b4fb53f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.447575 kubelet[2727]: E0129 14:37:38.447416 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.447575 kubelet[2727]: E0129 14:37:38.447490 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7nbqm" Jan 29 14:37:38.447575 kubelet[2727]: E0129 14:37:38.447531 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7nbqm" Jan 29 14:37:38.448842 kubelet[2727]: 
E0129 14:37:38.447601 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7nbqm_kube-system(9d07ade6-7ccb-46c5-be71-11453b4fb53f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7nbqm_kube-system(9d07ade6-7ccb-46c5-be71-11453b4fb53f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7nbqm" podUID="9d07ade6-7ccb-46c5-be71-11453b4fb53f" Jan 29 14:37:38.461993 containerd[1501]: time="2025-01-29T14:37:38.461910908Z" level=error msg="Failed to destroy network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.462430 containerd[1501]: time="2025-01-29T14:37:38.462377962Z" level=error msg="encountered an error cleaning up failed sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.462618 containerd[1501]: time="2025-01-29T14:37:38.462468694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-hclzq,Uid:8f491968-0d35-4eb0-81d6-df6823ca943a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.463872 kubelet[2727]: E0129 14:37:38.462839 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:38.463872 kubelet[2727]: E0129 14:37:38.462958 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" Jan 29 14:37:38.463872 kubelet[2727]: E0129 14:37:38.462989 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" Jan 29 14:37:38.464068 kubelet[2727]: E0129 14:37:38.463078 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b98d747cb-hclzq_calico-apiserver(8f491968-0d35-4eb0-81d6-df6823ca943a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-b98d747cb-hclzq_calico-apiserver(8f491968-0d35-4eb0-81d6-df6823ca943a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" podUID="8f491968-0d35-4eb0-81d6-df6823ca943a" Jan 29 14:37:38.478388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90-shm.mount: Deactivated successfully. Jan 29 14:37:39.250043 kubelet[2727]: I0129 14:37:39.249990 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Jan 29 14:37:39.255946 kubelet[2727]: I0129 14:37:39.255909 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:37:39.262912 kubelet[2727]: I0129 14:37:39.262335 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:37:39.266142 kubelet[2727]: I0129 14:37:39.266114 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:37:39.271497 kubelet[2727]: I0129 14:37:39.271130 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:37:39.274309 kubelet[2727]: I0129 14:37:39.274211 2727 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:37:39.314556 containerd[1501]: time="2025-01-29T14:37:39.314092866Z" level=info msg="StopPodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\"" Jan 29 14:37:39.316349 containerd[1501]: time="2025-01-29T14:37:39.315029429Z" level=info msg="StopPodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\"" Jan 29 14:37:39.316349 containerd[1501]: time="2025-01-29T14:37:39.316261589Z" level=info msg="Ensure that sandbox e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c in task-service has been cleanup successfully" Jan 29 14:37:39.316552 containerd[1501]: time="2025-01-29T14:37:39.316521157Z" level=info msg="Ensure that sandbox 666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428 in task-service has been cleanup successfully" Jan 29 14:37:39.318606 containerd[1501]: time="2025-01-29T14:37:39.318568211Z" level=info msg="StopPodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\"" Jan 29 14:37:39.318905 containerd[1501]: time="2025-01-29T14:37:39.318633433Z" level=info msg="StopPodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\"" Jan 29 14:37:39.319118 containerd[1501]: time="2025-01-29T14:37:39.319083698Z" level=info msg="Ensure that sandbox b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3 in task-service has been cleanup successfully" Jan 29 14:37:39.319492 containerd[1501]: time="2025-01-29T14:37:39.319431260Z" level=info msg="Ensure that sandbox a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90 in task-service has been cleanup successfully" Jan 29 14:37:39.320397 containerd[1501]: time="2025-01-29T14:37:39.318673745Z" level=info msg="StopPodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\"" Jan 29 14:37:39.322122 containerd[1501]: time="2025-01-29T14:37:39.322086979Z" level=info 
msg="Ensure that sandbox 16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225 in task-service has been cleanup successfully" Jan 29 14:37:39.322486 containerd[1501]: time="2025-01-29T14:37:39.318701701Z" level=info msg="StopPodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\"" Jan 29 14:37:39.322941 containerd[1501]: time="2025-01-29T14:37:39.322669177Z" level=info msg="Ensure that sandbox bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5 in task-service has been cleanup successfully" Jan 29 14:37:39.428392 containerd[1501]: time="2025-01-29T14:37:39.428300406Z" level=error msg="StopPodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" failed" error="failed to destroy network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:39.428858 kubelet[2727]: E0129 14:37:39.428747 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:37:39.429344 kubelet[2727]: E0129 14:37:39.429246 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"} Jan 29 14:37:39.429535 kubelet[2727]: E0129 14:37:39.429469 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"114e2b6a-4dec-46af-b960-cb1b36f44242\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 14:37:39.429843 kubelet[2727]: E0129 14:37:39.429720 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"114e2b6a-4dec-46af-b960-cb1b36f44242\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-snc7g" podUID="114e2b6a-4dec-46af-b960-cb1b36f44242" Jan 29 14:37:39.447288 containerd[1501]: time="2025-01-29T14:37:39.447035625Z" level=error msg="StopPodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" failed" error="failed to destroy network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:39.448840 kubelet[2727]: E0129 14:37:39.448141 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:37:39.448840 kubelet[2727]: E0129 14:37:39.448229 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90"} Jan 29 14:37:39.448840 kubelet[2727]: E0129 14:37:39.448314 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f491968-0d35-4eb0-81d6-df6823ca943a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 14:37:39.448840 kubelet[2727]: E0129 14:37:39.448373 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f491968-0d35-4eb0-81d6-df6823ca943a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" podUID="8f491968-0d35-4eb0-81d6-df6823ca943a" Jan 29 14:37:39.477098 containerd[1501]: time="2025-01-29T14:37:39.476988907Z" level=error msg="StopPodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" failed" error="failed to destroy network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 14:37:39.477531 kubelet[2727]: E0129 14:37:39.477456 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Jan 29 14:37:39.477660 kubelet[2727]: E0129 14:37:39.477551 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"} Jan 29 14:37:39.477660 kubelet[2727]: E0129 14:37:39.477603 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9171cee1-001f-4815-a918-01b00e67d3d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 14:37:39.478040 kubelet[2727]: E0129 14:37:39.477650 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9171cee1-001f-4815-a918-01b00e67d3d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zh8xr" podUID="9171cee1-001f-4815-a918-01b00e67d3d3" Jan 29 
14:37:39.483626 containerd[1501]: time="2025-01-29T14:37:39.483308568Z" level=error msg="StopPodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" failed" error="failed to destroy network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:39.483626 containerd[1501]: time="2025-01-29T14:37:39.483469867Z" level=error msg="StopPodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" failed" error="failed to destroy network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:39.484189 kubelet[2727]: E0129 14:37:39.483898 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:37:39.484189 kubelet[2727]: E0129 14:37:39.483962 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c"} Jan 29 14:37:39.484189 kubelet[2727]: E0129 14:37:39.484007 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 14:37:39.484189 kubelet[2727]: E0129 14:37:39.484048 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77d6948c77-z6748" podUID="5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" Jan 29 14:37:39.485071 kubelet[2727]: E0129 14:37:39.484901 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:37:39.485071 kubelet[2727]: E0129 14:37:39.484957 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5"} Jan 29 14:37:39.485071 kubelet[2727]: E0129 14:37:39.484998 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d07ade6-7ccb-46c5-be71-11453b4fb53f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 14:37:39.485071 kubelet[2727]: E0129 14:37:39.485035 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d07ade6-7ccb-46c5-be71-11453b4fb53f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7nbqm" podUID="9d07ade6-7ccb-46c5-be71-11453b4fb53f" Jan 29 14:37:39.485751 kubelet[2727]: E0129 14:37:39.485703 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:37:39.485896 containerd[1501]: time="2025-01-29T14:37:39.485431448Z" level=error msg="StopPodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" failed" error="failed to destroy network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 14:37:39.486002 kubelet[2727]: 
E0129 14:37:39.485751 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225"} Jan 29 14:37:39.486002 kubelet[2727]: E0129 14:37:39.485788 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41990643-0c0f-44e1-bc9b-12b09a813a6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 14:37:39.486002 kubelet[2727]: E0129 14:37:39.485849 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41990643-0c0f-44e1-bc9b-12b09a813a6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" podUID="41990643-0c0f-44e1-bc9b-12b09a813a6e" Jan 29 14:37:48.653415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046722327.mount: Deactivated successfully. 
Jan 29 14:37:48.931925 containerd[1501]: time="2025-01-29T14:37:48.857716934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 14:37:48.931925 containerd[1501]: time="2025-01-29T14:37:48.927636131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:48.957598 containerd[1501]: time="2025-01-29T14:37:48.957494771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.692179114s" Jan 29 14:37:48.957598 containerd[1501]: time="2025-01-29T14:37:48.957578344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 14:37:48.957883 containerd[1501]: time="2025-01-29T14:37:48.957839675Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:48.959438 containerd[1501]: time="2025-01-29T14:37:48.959333425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:48.995481 containerd[1501]: time="2025-01-29T14:37:48.995178126Z" level=info msg="CreateContainer within sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 14:37:49.048095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208793444.mount: 
Deactivated successfully. Jan 29 14:37:49.060123 containerd[1501]: time="2025-01-29T14:37:49.060035401Z" level=info msg="CreateContainer within sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\"" Jan 29 14:37:49.071444 containerd[1501]: time="2025-01-29T14:37:49.071326136Z" level=info msg="StartContainer for \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\"" Jan 29 14:37:49.192158 systemd[1]: Started cri-containerd-aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb.scope - libcontainer container aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb. Jan 29 14:37:49.293696 containerd[1501]: time="2025-01-29T14:37:49.293601990Z" level=info msg="StartContainer for \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\" returns successfully" Jan 29 14:37:49.544440 kubelet[2727]: I0129 14:37:49.492885 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-csznj" podStartSLOduration=2.847369907 podStartE2EDuration="26.471333476s" podCreationTimestamp="2025-01-29 14:37:23 +0000 UTC" firstStartedPulling="2025-01-29 14:37:25.335636188 +0000 UTC m=+22.528327449" lastFinishedPulling="2025-01-29 14:37:48.959599753 +0000 UTC m=+46.152291018" observedRunningTime="2025-01-29 14:37:49.468714244 +0000 UTC m=+46.661405522" watchObservedRunningTime="2025-01-29 14:37:49.471333476 +0000 UTC m=+46.664024752" Jan 29 14:37:49.559059 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 14:37:49.560158 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 14:37:51.044767 containerd[1501]: time="2025-01-29T14:37:51.044542302Z" level=info msg="StopPodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\"" Jan 29 14:37:51.044767 containerd[1501]: time="2025-01-29T14:37:51.044572587Z" level=info msg="StopPodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\"" Jan 29 14:37:51.049950 containerd[1501]: time="2025-01-29T14:37:51.049192051Z" level=info msg="StopPodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\"" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.269 [INFO][3962] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.270 [INFO][3962] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" iface="eth0" netns="/var/run/netns/cni-2fb248b7-d631-4a03-fc53-2c2857eab97c" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.272 [INFO][3962] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" iface="eth0" netns="/var/run/netns/cni-2fb248b7-d631-4a03-fc53-2c2857eab97c" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.273 [INFO][3962] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" iface="eth0" netns="/var/run/netns/cni-2fb248b7-d631-4a03-fc53-2c2857eab97c" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.273 [INFO][3962] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.274 [INFO][3962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.552 [INFO][4016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.557 [INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.557 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.581 [WARNING][4016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.581 [INFO][4016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.583 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:51.594848 containerd[1501]: 2025-01-29 14:37:51.588 [INFO][3962] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:37:51.605769 systemd[1]: run-netns-cni\x2d2fb248b7\x2dd631\x2d4a03\x2dfc53\x2d2c2857eab97c.mount: Deactivated successfully. Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.253 [INFO][3967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.257 [INFO][3967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" iface="eth0" netns="/var/run/netns/cni-99b95391-500d-418d-4d59-9cb1f2060c86" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.260 [INFO][3967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" iface="eth0" netns="/var/run/netns/cni-99b95391-500d-418d-4d59-9cb1f2060c86" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.263 [INFO][3967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" iface="eth0" netns="/var/run/netns/cni-99b95391-500d-418d-4d59-9cb1f2060c86" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.263 [INFO][3967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.263 [INFO][3967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.551 [INFO][4014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.558 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.584 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.621 [WARNING][4014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.621 [INFO][4014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.628 [INFO][4014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:51.653845 containerd[1501]: 2025-01-29 14:37:51.649 [INFO][3967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:37:51.659974 containerd[1501]: time="2025-01-29T14:37:51.656436998Z" level=info msg="TearDown network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" successfully" Jan 29 14:37:51.659974 containerd[1501]: time="2025-01-29T14:37:51.656484025Z" level=info msg="StopPodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" returns successfully" Jan 29 14:37:51.660497 containerd[1501]: time="2025-01-29T14:37:51.660461747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d6948c77-z6748,Uid:5ff99ef3-cf8a-4707-99e1-ba45cf048b9d,Namespace:calico-system,Attempt:1,}" Jan 29 14:37:51.664201 systemd[1]: run-netns-cni\x2d99b95391\x2d500d\x2d418d\x2d4d59\x2d9cb1f2060c86.mount: Deactivated successfully. 
Jan 29 14:37:51.669504 containerd[1501]: time="2025-01-29T14:37:51.669462994Z" level=info msg="TearDown network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" successfully" Jan 29 14:37:51.669899 containerd[1501]: time="2025-01-29T14:37:51.669847282Z" level=info msg="StopPodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" returns successfully" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.260 [INFO][3966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.260 [INFO][3966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" iface="eth0" netns="/var/run/netns/cni-0873adaf-b90b-5cbf-df3f-d7cdbf472949" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.262 [INFO][3966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" iface="eth0" netns="/var/run/netns/cni-0873adaf-b90b-5cbf-df3f-d7cdbf472949" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.264 [INFO][3966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" iface="eth0" netns="/var/run/netns/cni-0873adaf-b90b-5cbf-df3f-d7cdbf472949" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.264 [INFO][3966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.264 [INFO][3966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.552 [INFO][4015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.558 [INFO][4015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.628 [INFO][4015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.648 [WARNING][4015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.648 [INFO][4015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.652 [INFO][4015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:51.670122 containerd[1501]: 2025-01-29 14:37:51.663 [INFO][3966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Jan 29 14:37:51.674999 containerd[1501]: time="2025-01-29T14:37:51.674307746Z" level=info msg="TearDown network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" successfully" Jan 29 14:37:51.674999 containerd[1501]: time="2025-01-29T14:37:51.674335212Z" level=info msg="StopPodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" returns successfully" Jan 29 14:37:51.682550 containerd[1501]: time="2025-01-29T14:37:51.677591216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-8t2mn,Uid:41990643-0c0f-44e1-bc9b-12b09a813a6e,Namespace:calico-apiserver,Attempt:1,}" Jan 29 14:37:51.678793 systemd[1]: run-netns-cni\x2d0873adaf\x2db90b\x2d5cbf\x2ddf3f\x2dd7cdbf472949.mount: Deactivated successfully. 
Jan 29 14:37:51.686411 containerd[1501]: time="2025-01-29T14:37:51.686351689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh8xr,Uid:9171cee1-001f-4815-a918-01b00e67d3d3,Namespace:calico-system,Attempt:1,}" Jan 29 14:37:52.044697 containerd[1501]: time="2025-01-29T14:37:52.044106328Z" level=info msg="StopPodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\"" Jan 29 14:37:52.227715 systemd-networkd[1417]: caliedc026b2ea8: Link UP Jan 29 14:37:52.232726 systemd-networkd[1417]: caliedc026b2ea8: Gained carrier Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:51.903 [INFO][4112] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:51.942 [INFO][4112] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0 csi-node-driver- calico-system 9171cee1-001f-4815-a918-01b00e67d3d3 788 0 2025-01-29 14:37:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com csi-node-driver-zh8xr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliedc026b2ea8 [] []}} ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:51.942 [INFO][4112] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" 
WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.086 [INFO][4138] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" HandleID="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.115 [INFO][4138] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" HandleID="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c13b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rni4s.gb1.brightbox.com", "pod":"csi-node-driver-zh8xr", "timestamp":"2025-01-29 14:37:52.086297657 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.116 [INFO][4138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.117 [INFO][4138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.117 [INFO][4138] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com' Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.125 [INFO][4138] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.140 [INFO][4138] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.149 [INFO][4138] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.156 [INFO][4138] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.160 [INFO][4138] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.160 [INFO][4138] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.165 [INFO][4138] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.173 [INFO][4138] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.189 [INFO][4138] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.193/26] block=192.168.0.192/26 handle="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.189 [INFO][4138] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.193/26] handle="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.190 [INFO][4138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:52.290842 containerd[1501]: 2025-01-29 14:37:52.190 [INFO][4138] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.193/26] IPv6=[] ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" HandleID="k8s-pod-network.4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.292508 containerd[1501]: 2025-01-29 14:37:52.195 [INFO][4112] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9171cee1-001f-4815-a918-01b00e67d3d3", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-zh8xr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc026b2ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:52.292508 containerd[1501]: 2025-01-29 14:37:52.196 [INFO][4112] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.193/32] ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.292508 containerd[1501]: 2025-01-29 14:37:52.196 [INFO][4112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedc026b2ea8 ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.292508 containerd[1501]: 2025-01-29 14:37:52.232 [INFO][4112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.292508 
containerd[1501]: 2025-01-29 14:37:52.234 [INFO][4112] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9171cee1-001f-4815-a918-01b00e67d3d3", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c", Pod:"csi-node-driver-zh8xr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc026b2ea8", MAC:"7a:5a:80:70:83:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:52.292508 containerd[1501]: 2025-01-29 14:37:52.273 [INFO][4112] 
cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c" Namespace="calico-system" Pod="csi-node-driver-zh8xr" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0" Jan 29 14:37:52.320136 systemd-networkd[1417]: cali8b988c0073c: Link UP Jan 29 14:37:52.323462 systemd-networkd[1417]: cali8b988c0073c: Gained carrier Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:51.900 [INFO][4105] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:51.947 [INFO][4105] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0 calico-apiserver-b98d747cb- calico-apiserver 41990643-0c0f-44e1-bc9b-12b09a813a6e 789 0 2025-01-29 14:37:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b98d747cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com calico-apiserver-b98d747cb-8t2mn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b988c0073c [] []}} ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-" Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:51.948 [INFO][4105] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 
14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.085 [INFO][4139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" HandleID="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.116 [INFO][4139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" HandleID="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-rni4s.gb1.brightbox.com", "pod":"calico-apiserver-b98d747cb-8t2mn", "timestamp":"2025-01-29 14:37:52.085340682 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.117 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.190 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.190 [INFO][4139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com'
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.202 [INFO][4139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.230 [INFO][4139] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.252 [INFO][4139] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.257 [INFO][4139] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.263 [INFO][4139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.263 [INFO][4139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.267 [INFO][4139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.291 [INFO][4139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.311 [INFO][4139] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.194/26] block=192.168.0.192/26 handle="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.311 [INFO][4139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.194/26] handle="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.312 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:37:52.364444 containerd[1501]: 2025-01-29 14:37:52.312 [INFO][4139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.194/26] IPv6=[] ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" HandleID="k8s-pod-network.8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0"
Jan 29 14:37:52.367406 containerd[1501]: 2025-01-29 14:37:52.316 [INFO][4105] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"41990643-0c0f-44e1-bc9b-12b09a813a6e", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-b98d747cb-8t2mn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b988c0073c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:37:52.367406 containerd[1501]: 2025-01-29 14:37:52.316 [INFO][4105] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.194/32] ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0"
Jan 29 14:37:52.367406 containerd[1501]: 2025-01-29 14:37:52.316 [INFO][4105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b988c0073c ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0"
Jan 29 14:37:52.367406 containerd[1501]: 2025-01-29 14:37:52.320 [INFO][4105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0"
Jan 29 14:37:52.367406 containerd[1501]: 2025-01-29 14:37:52.322 [INFO][4105] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"41990643-0c0f-44e1-bc9b-12b09a813a6e", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36", Pod:"calico-apiserver-b98d747cb-8t2mn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b988c0073c", MAC:"e2:10:81:ce:89:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:37:52.367406 containerd[1501]: 2025-01-29 14:37:52.359 [INFO][4105] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-8t2mn" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0"
Jan 29 14:37:52.403481 systemd-networkd[1417]: calic7190d9e812: Link UP
Jan 29 14:37:52.404954 systemd-networkd[1417]: calic7190d9e812: Gained carrier
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.178 [INFO][4171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.179 [INFO][4171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" iface="eth0" netns="/var/run/netns/cni-ad06b590-c741-0639-1817-aefaa1587565"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.182 [INFO][4171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" iface="eth0" netns="/var/run/netns/cni-ad06b590-c741-0639-1817-aefaa1587565"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.182 [INFO][4171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" iface="eth0" netns="/var/run/netns/cni-ad06b590-c741-0639-1817-aefaa1587565"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.182 [INFO][4171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.183 [INFO][4171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.296 [INFO][4178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.296 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.376 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.393 [WARNING][4178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.393 [INFO][4178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.406 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:37:52.418493 containerd[1501]: 2025-01-29 14:37:52.411 [INFO][4171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90"
Jan 29 14:37:52.421431 containerd[1501]: time="2025-01-29T14:37:52.421378496Z" level=info msg="TearDown network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" successfully"
Jan 29 14:37:52.421431 containerd[1501]: time="2025-01-29T14:37:52.421425586Z" level=info msg="StopPodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" returns successfully"
Jan 29 14:37:52.422381 containerd[1501]: time="2025-01-29T14:37:52.422342439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-hclzq,Uid:8f491968-0d35-4eb0-81d6-df6823ca943a,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 14:37:52.438148 containerd[1501]: time="2025-01-29T14:37:52.435302855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 14:37:52.438148 containerd[1501]: time="2025-01-29T14:37:52.435443247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 14:37:52.438148 containerd[1501]: time="2025-01-29T14:37:52.435483043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:37:52.438148 containerd[1501]: time="2025-01-29T14:37:52.435692615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:51.904 [INFO][4100] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:51.942 [INFO][4100] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0 calico-kube-controllers-77d6948c77- calico-system 5ff99ef3-cf8a-4707-99e1-ba45cf048b9d 787 0 2025-01-29 14:37:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77d6948c77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com calico-kube-controllers-77d6948c77-z6748 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7190d9e812 [] []}} ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:51.942 [INFO][4100] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.118 [INFO][4141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.137 [INFO][4141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051630), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rni4s.gb1.brightbox.com", "pod":"calico-kube-controllers-77d6948c77-z6748", "timestamp":"2025-01-29 14:37:52.118579667 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.137 [INFO][4141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.312 [INFO][4141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.312 [INFO][4141] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com'
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.317 [INFO][4141] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.332 [INFO][4141] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.339 [INFO][4141] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.342 [INFO][4141] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.346 [INFO][4141] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.346 [INFO][4141] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.348 [INFO][4141] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.359 [INFO][4141] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.372 [INFO][4141] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.195/26] block=192.168.0.192/26 handle="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.372 [INFO][4141] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.195/26] handle="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.372 [INFO][4141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:37:52.438771 containerd[1501]: 2025-01-29 14:37:52.372 [INFO][4141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.195/26] IPv6=[] ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.440655 containerd[1501]: 2025-01-29 14:37:52.378 [INFO][4100] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0", GenerateName:"calico-kube-controllers-77d6948c77-", Namespace:"calico-system", SelfLink:"", UID:"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d6948c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-77d6948c77-z6748", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7190d9e812", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:37:52.440655 containerd[1501]: 2025-01-29 14:37:52.379 [INFO][4100] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.195/32] ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.440655 containerd[1501]: 2025-01-29 14:37:52.379 [INFO][4100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7190d9e812 ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.440655 containerd[1501]: 2025-01-29 14:37:52.406 [INFO][4100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.440655 containerd[1501]: 2025-01-29 14:37:52.407 [INFO][4100] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0", GenerateName:"calico-kube-controllers-77d6948c77-", Namespace:"calico-system", SelfLink:"", UID:"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d6948c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81", Pod:"calico-kube-controllers-77d6948c77-z6748", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7190d9e812", MAC:"86:ef:79:1d:a3:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:37:52.440655 containerd[1501]: 2025-01-29 14:37:52.432 [INFO][4100] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Namespace="calico-system" Pod="calico-kube-controllers-77d6948c77-z6748" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:37:52.507135 systemd[1]: Started cri-containerd-4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c.scope - libcontainer container 4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c.
Jan 29 14:37:52.513631 containerd[1501]: time="2025-01-29T14:37:52.512760115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 14:37:52.513631 containerd[1501]: time="2025-01-29T14:37:52.512913666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 14:37:52.513631 containerd[1501]: time="2025-01-29T14:37:52.512961901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:37:52.515790 containerd[1501]: time="2025-01-29T14:37:52.514322960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:37:52.553511 containerd[1501]: time="2025-01-29T14:37:52.553197266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 14:37:52.553511 containerd[1501]: time="2025-01-29T14:37:52.553343071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 14:37:52.553511 containerd[1501]: time="2025-01-29T14:37:52.553366481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:37:52.554334 containerd[1501]: time="2025-01-29T14:37:52.554204144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:37:52.587403 systemd[1]: run-netns-cni\x2dad06b590\x2dc741\x2d0639\x2d1817\x2daefaa1587565.mount: Deactivated successfully.
Jan 29 14:37:52.614198 systemd[1]: Started cri-containerd-8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36.scope - libcontainer container 8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36.
Jan 29 14:37:52.627409 containerd[1501]: time="2025-01-29T14:37:52.627358839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh8xr,Uid:9171cee1-001f-4815-a918-01b00e67d3d3,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c\""
Jan 29 14:37:52.640797 containerd[1501]: time="2025-01-29T14:37:52.640752175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 14:37:52.672582 systemd[1]: Started cri-containerd-0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81.scope - libcontainer container 0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81.
Jan 29 14:37:52.817872 containerd[1501]: time="2025-01-29T14:37:52.817208289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-8t2mn,Uid:41990643-0c0f-44e1-bc9b-12b09a813a6e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36\""
Jan 29 14:37:52.861260 containerd[1501]: time="2025-01-29T14:37:52.859076580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d6948c77-z6748,Uid:5ff99ef3-cf8a-4707-99e1-ba45cf048b9d,Namespace:calico-system,Attempt:1,} returns sandbox id \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\""
Jan 29 14:37:52.869160 systemd-networkd[1417]: cali975492754d3: Link UP
Jan 29 14:37:52.869506 systemd-networkd[1417]: cali975492754d3: Gained carrier
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.601 [INFO][4242] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.658 [INFO][4242] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0 calico-apiserver-b98d747cb- calico-apiserver 8f491968-0d35-4eb0-81d6-df6823ca943a 795 0 2025-01-29 14:37:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b98d747cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com calico-apiserver-b98d747cb-hclzq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali975492754d3 [] []}} ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.658 [INFO][4242] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.767 [INFO][4340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" HandleID="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.787 [INFO][4340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" HandleID="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a1300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-rni4s.gb1.brightbox.com", "pod":"calico-apiserver-b98d747cb-hclzq", "timestamp":"2025-01-29 14:37:52.767379171 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.787 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.787 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.787 [INFO][4340] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com'
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.790 [INFO][4340] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.803 [INFO][4340] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.811 [INFO][4340] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.816 [INFO][4340] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.823 [INFO][4340] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.823 [INFO][4340] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.825 [INFO][4340] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.834 [INFO][4340] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.850 [INFO][4340] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.196/26] block=192.168.0.192/26 handle="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.850 [INFO][4340] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.196/26] handle="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" host="srv-rni4s.gb1.brightbox.com"
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.850 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:37:52.904682 containerd[1501]: 2025-01-29 14:37:52.851 [INFO][4340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.196/26] IPv6=[] ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" HandleID="k8s-pod-network.9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.907250 containerd[1501]: 2025-01-29 14:37:52.857 [INFO][4242] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f491968-0d35-4eb0-81d6-df6823ca943a", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-b98d747cb-hclzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali975492754d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:37:52.907250 containerd[1501]: 2025-01-29 14:37:52.858 [INFO][4242] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.196/32] ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.907250 containerd[1501]: 2025-01-29 14:37:52.858 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali975492754d3 ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.907250 containerd[1501]: 2025-01-29 14:37:52.869 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0"
Jan 29 14:37:52.907250 containerd[1501]: 2025-01-29 14:37:52.871 [INFO][4242] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f491968-0d35-4eb0-81d6-df6823ca943a", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef", Pod:"calico-apiserver-b98d747cb-hclzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil),
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali975492754d3", MAC:"86:9c:9c:c4:b6:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:52.907250 containerd[1501]: 2025-01-29 14:37:52.899 [INFO][4242] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef" Namespace="calico-apiserver" Pod="calico-apiserver-b98d747cb-hclzq" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:37:52.945911 containerd[1501]: time="2025-01-29T14:37:52.944892227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:52.945911 containerd[1501]: time="2025-01-29T14:37:52.945029805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:52.945911 containerd[1501]: time="2025-01-29T14:37:52.945056375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:52.946282 containerd[1501]: time="2025-01-29T14:37:52.945908997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:53.001770 systemd[1]: Started cri-containerd-9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef.scope - libcontainer container 9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef. 
Jan 29 14:37:53.042831 containerd[1501]: time="2025-01-29T14:37:53.042733557Z" level=info msg="StopPodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\"" Jan 29 14:37:53.046690 containerd[1501]: time="2025-01-29T14:37:53.046339607Z" level=info msg="StopPodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\"" Jan 29 14:37:53.329209 containerd[1501]: time="2025-01-29T14:37:53.329154014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b98d747cb-hclzq,Uid:8f491968-0d35-4eb0-81d6-df6823ca943a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef\"" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.170 [INFO][4453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.170 [INFO][4453] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" iface="eth0" netns="/var/run/netns/cni-b5f06ffc-4af0-5144-3720-c77a94faa706" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.173 [INFO][4453] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" iface="eth0" netns="/var/run/netns/cni-b5f06ffc-4af0-5144-3720-c77a94faa706" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.174 [INFO][4453] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" iface="eth0" netns="/var/run/netns/cni-b5f06ffc-4af0-5144-3720-c77a94faa706" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.174 [INFO][4453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.174 [INFO][4453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.315 [INFO][4463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.316 [INFO][4463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.316 [INFO][4463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.330 [WARNING][4463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.331 [INFO][4463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.340 [INFO][4463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:53.345049 containerd[1501]: 2025-01-29 14:37:53.342 [INFO][4453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:37:53.348458 containerd[1501]: time="2025-01-29T14:37:53.348333409Z" level=info msg="TearDown network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" successfully" Jan 29 14:37:53.348458 containerd[1501]: time="2025-01-29T14:37:53.348376707Z" level=info msg="StopPodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" returns successfully" Jan 29 14:37:53.351854 containerd[1501]: time="2025-01-29T14:37:53.350039617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-snc7g,Uid:114e2b6a-4dec-46af-b960-cb1b36f44242,Namespace:kube-system,Attempt:1,}" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.262 [INFO][4452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.264 [INFO][4452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" iface="eth0" netns="/var/run/netns/cni-d5ae40e6-2918-2251-c931-068531638240" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.266 [INFO][4452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" iface="eth0" netns="/var/run/netns/cni-d5ae40e6-2918-2251-c931-068531638240" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.266 [INFO][4452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" iface="eth0" netns="/var/run/netns/cni-d5ae40e6-2918-2251-c931-068531638240" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.267 [INFO][4452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.267 [INFO][4452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.355 [INFO][4469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.356 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.356 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.368 [WARNING][4469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.368 [INFO][4469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.371 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:53.379012 containerd[1501]: 2025-01-29 14:37:53.374 [INFO][4452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:37:53.380644 containerd[1501]: time="2025-01-29T14:37:53.380429710Z" level=info msg="TearDown network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" successfully" Jan 29 14:37:53.381047 containerd[1501]: time="2025-01-29T14:37:53.380982046Z" level=info msg="StopPodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" returns successfully" Jan 29 14:37:53.382300 containerd[1501]: time="2025-01-29T14:37:53.382267650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7nbqm,Uid:9d07ade6-7ccb-46c5-be71-11453b4fb53f,Namespace:kube-system,Attempt:1,}" Jan 29 14:37:53.575141 systemd[1]: run-containerd-runc-k8s.io-9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef-runc.LLzFlX.mount: Deactivated successfully. 
Jan 29 14:37:53.575295 systemd[1]: run-netns-cni\x2dd5ae40e6\x2d2918\x2d2251\x2dc931\x2d068531638240.mount: Deactivated successfully. Jan 29 14:37:53.575395 systemd[1]: run-netns-cni\x2db5f06ffc\x2d4af0\x2d5144\x2d3720\x2dc77a94faa706.mount: Deactivated successfully. Jan 29 14:37:53.602632 kubelet[2727]: I0129 14:37:53.600406 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:37:53.680993 systemd-networkd[1417]: caliedc026b2ea8: Gained IPv6LL Jan 29 14:37:53.709925 systemd-networkd[1417]: calib6dd26487ca: Link UP Jan 29 14:37:53.710267 systemd-networkd[1417]: calib6dd26487ca: Gained carrier Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.408 [INFO][4487] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.431 [INFO][4487] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0 coredns-7db6d8ff4d- kube-system 114e2b6a-4dec-46af-b960-cb1b36f44242 815 0 2025-01-29 14:37:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com coredns-7db6d8ff4d-snc7g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6dd26487ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.431 [INFO][4487] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" 
WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.610 [INFO][4511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" HandleID="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.631 [INFO][4511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" HandleID="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc1b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-rni4s.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-snc7g", "timestamp":"2025-01-29 14:37:53.610039543 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.632 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.632 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.632 [INFO][4511] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com' Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.640 [INFO][4511] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.650 [INFO][4511] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.661 [INFO][4511] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.665 [INFO][4511] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.673 [INFO][4511] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.673 [INFO][4511] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.677 [INFO][4511] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6 Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.684 [INFO][4511] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.698 [INFO][4511] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.197/26] block=192.168.0.192/26 handle="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.698 [INFO][4511] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.197/26] handle="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.698 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:53.743411 containerd[1501]: 2025-01-29 14:37:53.698 [INFO][4511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.197/26] IPv6=[] ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" HandleID="k8s-pod-network.eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.746355 containerd[1501]: 2025-01-29 14:37:53.703 [INFO][4487] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"114e2b6a-4dec-46af-b960-cb1b36f44242", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-snc7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dd26487ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:53.746355 containerd[1501]: 2025-01-29 14:37:53.704 [INFO][4487] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.197/32] ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.746355 containerd[1501]: 2025-01-29 14:37:53.704 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6dd26487ca ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.746355 containerd[1501]: 2025-01-29 14:37:53.709 [INFO][4487] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.746355 containerd[1501]: 2025-01-29 14:37:53.712 [INFO][4487] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"114e2b6a-4dec-46af-b960-cb1b36f44242", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6", Pod:"coredns-7db6d8ff4d-snc7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dd26487ca", 
MAC:"06:a0:eb:a5:d1:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:53.746355 containerd[1501]: 2025-01-29 14:37:53.739 [INFO][4487] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-snc7g" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:37:53.801889 containerd[1501]: time="2025-01-29T14:37:53.800854371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:53.801889 containerd[1501]: time="2025-01-29T14:37:53.801035265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:53.801889 containerd[1501]: time="2025-01-29T14:37:53.801084100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:53.801889 containerd[1501]: time="2025-01-29T14:37:53.801247898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:53.825075 systemd-networkd[1417]: caliedda56ef048: Link UP Jan 29 14:37:53.830405 systemd-networkd[1417]: caliedda56ef048: Gained carrier Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.441 [INFO][4497] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.490 [INFO][4497] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0 coredns-7db6d8ff4d- kube-system 9d07ade6-7ccb-46c5-be71-11453b4fb53f 816 0 2025-01-29 14:37:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com coredns-7db6d8ff4d-7nbqm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliedda56ef048 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.491 [INFO][4497] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.649 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" HandleID="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" 
Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.669 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" HandleID="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e5e20), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-rni4s.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-7nbqm", "timestamp":"2025-01-29 14:37:53.648579075 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.669 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.698 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.699 [INFO][4515] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com' Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.704 [INFO][4515] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.733 [INFO][4515] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.746 [INFO][4515] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.750 [INFO][4515] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.756 [INFO][4515] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.756 [INFO][4515] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.760 [INFO][4515] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.779 [INFO][4515] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.803 [INFO][4515] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.198/26] block=192.168.0.192/26 handle="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.803 [INFO][4515] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.198/26] handle="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.803 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:37:53.870728 containerd[1501]: 2025-01-29 14:37:53.803 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.198/26] IPv6=[] ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" HandleID="k8s-pod-network.816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.874597 containerd[1501]: 2025-01-29 14:37:53.808 [INFO][4497] cni-plugin/k8s.go 386: Populated endpoint ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d07ade6-7ccb-46c5-be71-11453b4fb53f", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-7nbqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedda56ef048", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:53.874597 containerd[1501]: 2025-01-29 14:37:53.808 [INFO][4497] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.198/32] ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.874597 containerd[1501]: 2025-01-29 14:37:53.811 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedda56ef048 ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.874597 containerd[1501]: 2025-01-29 14:37:53.828 [INFO][4497] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.874597 containerd[1501]: 2025-01-29 14:37:53.832 [INFO][4497] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d07ade6-7ccb-46c5-be71-11453b4fb53f", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df", Pod:"coredns-7db6d8ff4d-7nbqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedda56ef048", 
MAC:"96:cd:1b:ee:14:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:37:53.874597 containerd[1501]: 2025-01-29 14:37:53.860 [INFO][4497] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7nbqm" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:37:53.893132 systemd[1]: Started cri-containerd-eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6.scope - libcontainer container eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6. Jan 29 14:37:53.942833 containerd[1501]: time="2025-01-29T14:37:53.941180411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:37:53.943924 containerd[1501]: time="2025-01-29T14:37:53.943183757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:37:53.944997 containerd[1501]: time="2025-01-29T14:37:53.944911342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:53.945264 containerd[1501]: time="2025-01-29T14:37:53.945209880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:37:53.993897 containerd[1501]: time="2025-01-29T14:37:53.993844117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-snc7g,Uid:114e2b6a-4dec-46af-b960-cb1b36f44242,Namespace:kube-system,Attempt:1,} returns sandbox id \"eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6\"" Jan 29 14:37:54.004008 systemd-networkd[1417]: cali8b988c0073c: Gained IPv6LL Jan 29 14:37:54.004795 systemd[1]: Started cri-containerd-816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df.scope - libcontainer container 816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df. Jan 29 14:37:54.023872 containerd[1501]: time="2025-01-29T14:37:54.021377531Z" level=info msg="CreateContainer within sandbox \"eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 14:37:54.066869 containerd[1501]: time="2025-01-29T14:37:54.066754526Z" level=info msg="CreateContainer within sandbox \"eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"627943762e1c6d73a9584b692b742d960c9a01bdac3f6f70bf5da434ff2247a1\"" Jan 29 14:37:54.068616 containerd[1501]: time="2025-01-29T14:37:54.067941745Z" level=info msg="StartContainer for \"627943762e1c6d73a9584b692b742d960c9a01bdac3f6f70bf5da434ff2247a1\"" Jan 29 14:37:54.125521 containerd[1501]: time="2025-01-29T14:37:54.125052142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7nbqm,Uid:9d07ade6-7ccb-46c5-be71-11453b4fb53f,Namespace:kube-system,Attempt:1,} returns sandbox id \"816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df\"" Jan 29 14:37:54.136477 containerd[1501]: time="2025-01-29T14:37:54.136070034Z" level=info msg="CreateContainer within sandbox \"816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 14:37:54.163063 systemd[1]: Started cri-containerd-627943762e1c6d73a9584b692b742d960c9a01bdac3f6f70bf5da434ff2247a1.scope - libcontainer container 627943762e1c6d73a9584b692b742d960c9a01bdac3f6f70bf5da434ff2247a1. Jan 29 14:37:54.170108 containerd[1501]: time="2025-01-29T14:37:54.170059212Z" level=info msg="CreateContainer within sandbox \"816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d919eda6a01f76a52576eb970aa1daf5bea76478a85a8c50b907f705f0dc7b26\"" Jan 29 14:37:54.171821 containerd[1501]: time="2025-01-29T14:37:54.171760157Z" level=info msg="StartContainer for \"d919eda6a01f76a52576eb970aa1daf5bea76478a85a8c50b907f705f0dc7b26\"" Jan 29 14:37:54.243029 systemd[1]: Started cri-containerd-d919eda6a01f76a52576eb970aa1daf5bea76478a85a8c50b907f705f0dc7b26.scope - libcontainer container d919eda6a01f76a52576eb970aa1daf5bea76478a85a8c50b907f705f0dc7b26. 
Jan 29 14:37:54.266993 containerd[1501]: time="2025-01-29T14:37:54.266754324Z" level=info msg="StartContainer for \"627943762e1c6d73a9584b692b742d960c9a01bdac3f6f70bf5da434ff2247a1\" returns successfully" Jan 29 14:37:54.315658 containerd[1501]: time="2025-01-29T14:37:54.315589265Z" level=info msg="StartContainer for \"d919eda6a01f76a52576eb970aa1daf5bea76478a85a8c50b907f705f0dc7b26\" returns successfully" Jan 29 14:37:54.386333 systemd-networkd[1417]: calic7190d9e812: Gained IPv6LL Jan 29 14:37:54.577174 systemd-networkd[1417]: cali975492754d3: Gained IPv6LL Jan 29 14:37:54.592293 kubelet[2727]: I0129 14:37:54.592147 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7nbqm" podStartSLOduration=38.592116931 podStartE2EDuration="38.592116931s" podCreationTimestamp="2025-01-29 14:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:37:54.591386419 +0000 UTC m=+51.784077700" watchObservedRunningTime="2025-01-29 14:37:54.592116931 +0000 UTC m=+51.784808201" Jan 29 14:37:54.834907 systemd-networkd[1417]: calib6dd26487ca: Gained IPv6LL Jan 29 14:37:54.915117 containerd[1501]: time="2025-01-29T14:37:54.915052447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:54.917478 containerd[1501]: time="2025-01-29T14:37:54.917085325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 14:37:54.918657 containerd[1501]: time="2025-01-29T14:37:54.918368796Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:54.928611 containerd[1501]: time="2025-01-29T14:37:54.928273646Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:54.931108 containerd[1501]: time="2025-01-29T14:37:54.930450553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.289637076s" Jan 29 14:37:54.931108 containerd[1501]: time="2025-01-29T14:37:54.930497508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 14:37:54.935457 containerd[1501]: time="2025-01-29T14:37:54.935381851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 14:37:54.940920 containerd[1501]: time="2025-01-29T14:37:54.940466618Z" level=info msg="CreateContainer within sandbox \"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 14:37:54.967901 containerd[1501]: time="2025-01-29T14:37:54.967706870Z" level=info msg="CreateContainer within sandbox \"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2e09d8d9be62e1f3027aa4824ef503374efed01c3fecb10ce05d9cfef11df21a\"" Jan 29 14:37:54.974411 containerd[1501]: time="2025-01-29T14:37:54.973607580Z" level=info msg="StartContainer for \"2e09d8d9be62e1f3027aa4824ef503374efed01c3fecb10ce05d9cfef11df21a\"" Jan 29 14:37:55.090141 systemd[1]: Started cri-containerd-2e09d8d9be62e1f3027aa4824ef503374efed01c3fecb10ce05d9cfef11df21a.scope - libcontainer container 2e09d8d9be62e1f3027aa4824ef503374efed01c3fecb10ce05d9cfef11df21a. 
Jan 29 14:37:55.212559 containerd[1501]: time="2025-01-29T14:37:55.212499174Z" level=info msg="StartContainer for \"2e09d8d9be62e1f3027aa4824ef503374efed01c3fecb10ce05d9cfef11df21a\" returns successfully" Jan 29 14:37:55.281045 systemd-networkd[1417]: caliedda56ef048: Gained IPv6LL Jan 29 14:37:55.409903 kernel: bpftool[4799]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 14:37:55.577829 kubelet[2727]: I0129 14:37:55.577319 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-snc7g" podStartSLOduration=39.577296513 podStartE2EDuration="39.577296513s" podCreationTimestamp="2025-01-29 14:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:37:54.624686427 +0000 UTC m=+51.817377704" watchObservedRunningTime="2025-01-29 14:37:55.577296513 +0000 UTC m=+52.769987780" Jan 29 14:37:55.879472 systemd-networkd[1417]: vxlan.calico: Link UP Jan 29 14:37:55.879489 systemd-networkd[1417]: vxlan.calico: Gained carrier Jan 29 14:37:57.201061 systemd-networkd[1417]: vxlan.calico: Gained IPv6LL Jan 29 14:37:58.556693 systemd[1]: run-containerd-runc-k8s.io-aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb-runc.46V7GS.mount: Deactivated successfully. 
Jan 29 14:37:58.882221 containerd[1501]: time="2025-01-29T14:37:58.882124032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:58.892883 containerd[1501]: time="2025-01-29T14:37:58.891688086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 14:37:58.892883 containerd[1501]: time="2025-01-29T14:37:58.891892992Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:58.895900 containerd[1501]: time="2025-01-29T14:37:58.895797254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:37:58.896856 containerd[1501]: time="2025-01-29T14:37:58.896751943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.96131186s" Jan 29 14:37:58.898754 containerd[1501]: time="2025-01-29T14:37:58.898712108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 14:37:58.910385 containerd[1501]: time="2025-01-29T14:37:58.910317406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 14:37:58.916977 containerd[1501]: time="2025-01-29T14:37:58.915690980Z" level=info msg="CreateContainer within sandbox 
\"8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 14:37:58.938260 containerd[1501]: time="2025-01-29T14:37:58.938067978Z" level=info msg="CreateContainer within sandbox \"8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4e8ce5d1a96b98fb34c04e0ca1bb118258e6409ee39b33f6d2aad58bfa0e339f\"" Jan 29 14:37:58.941086 containerd[1501]: time="2025-01-29T14:37:58.940940659Z" level=info msg="StartContainer for \"4e8ce5d1a96b98fb34c04e0ca1bb118258e6409ee39b33f6d2aad58bfa0e339f\"" Jan 29 14:37:59.004025 systemd[1]: Started cri-containerd-4e8ce5d1a96b98fb34c04e0ca1bb118258e6409ee39b33f6d2aad58bfa0e339f.scope - libcontainer container 4e8ce5d1a96b98fb34c04e0ca1bb118258e6409ee39b33f6d2aad58bfa0e339f. Jan 29 14:37:59.082834 containerd[1501]: time="2025-01-29T14:37:59.082744636Z" level=info msg="StartContainer for \"4e8ce5d1a96b98fb34c04e0ca1bb118258e6409ee39b33f6d2aad58bfa0e339f\" returns successfully" Jan 29 14:37:59.685831 kubelet[2727]: I0129 14:37:59.685508 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b98d747cb-8t2mn" podStartSLOduration=29.597051008 podStartE2EDuration="35.685413257s" podCreationTimestamp="2025-01-29 14:37:24 +0000 UTC" firstStartedPulling="2025-01-29 14:37:52.821321945 +0000 UTC m=+50.014013206" lastFinishedPulling="2025-01-29 14:37:58.909684186 +0000 UTC m=+56.102375455" observedRunningTime="2025-01-29 14:37:59.617506046 +0000 UTC m=+56.810197329" watchObservedRunningTime="2025-01-29 14:37:59.685413257 +0000 UTC m=+56.878104518" Jan 29 14:38:00.420894 containerd[1501]: time="2025-01-29T14:38:00.420829426Z" level=info msg="StopContainer for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" with timeout 300 (s)" Jan 29 14:38:00.425475 containerd[1501]: time="2025-01-29T14:38:00.425327590Z" 
level=info msg="Stop container \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" with signal terminated" Jan 29 14:38:00.599716 kubelet[2727]: I0129 14:38:00.599631 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:38:00.676576 systemd[1]: run-containerd-runc-k8s.io-aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb-runc.N96eoY.mount: Deactivated successfully. Jan 29 14:38:01.058576 containerd[1501]: time="2025-01-29T14:38:01.057645419Z" level=info msg="StopContainer for \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\" with timeout 5 (s)" Jan 29 14:38:01.060876 containerd[1501]: time="2025-01-29T14:38:01.059819008Z" level=info msg="Stop container \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\" with signal terminated" Jan 29 14:38:01.134259 systemd[1]: cri-containerd-aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb.scope: Deactivated successfully. Jan 29 14:38:01.135453 systemd[1]: cri-containerd-aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb.scope: Consumed 2.988s CPU time. Jan 29 14:38:01.204723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb-rootfs.mount: Deactivated successfully. 
Jan 29 14:38:01.205943 containerd[1501]: time="2025-01-29T14:38:01.204962221Z" level=info msg="shim disconnected" id=aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb namespace=k8s.io Jan 29 14:38:01.205943 containerd[1501]: time="2025-01-29T14:38:01.205100979Z" level=warning msg="cleaning up after shim disconnected" id=aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb namespace=k8s.io Jan 29 14:38:01.205943 containerd[1501]: time="2025-01-29T14:38:01.205120672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:01.249767 containerd[1501]: time="2025-01-29T14:38:01.249537795Z" level=warning msg="cleanup warnings time=\"2025-01-29T14:38:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 14:38:01.439743 containerd[1501]: time="2025-01-29T14:38:01.438510706Z" level=info msg="StopContainer for \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\" returns successfully" Jan 29 14:38:01.444167 containerd[1501]: time="2025-01-29T14:38:01.444130719Z" level=info msg="StopPodSandbox for \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\"" Jan 29 14:38:01.444278 containerd[1501]: time="2025-01-29T14:38:01.444203809Z" level=info msg="Container to stop \"a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 14:38:01.444278 containerd[1501]: time="2025-01-29T14:38:01.444237987Z" level=info msg="Container to stop \"a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 14:38:01.444278 containerd[1501]: time="2025-01-29T14:38:01.444254683Z" level=info msg="Container to stop \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jan 29 14:38:01.450704 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f-shm.mount: Deactivated successfully. Jan 29 14:38:01.482425 systemd[1]: cri-containerd-47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f.scope: Deactivated successfully. Jan 29 14:38:01.535913 containerd[1501]: time="2025-01-29T14:38:01.535686180Z" level=info msg="shim disconnected" id=47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f namespace=k8s.io Jan 29 14:38:01.537067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f-rootfs.mount: Deactivated successfully. Jan 29 14:38:01.539584 containerd[1501]: time="2025-01-29T14:38:01.535875914Z" level=warning msg="cleaning up after shim disconnected" id=47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f namespace=k8s.io Jan 29 14:38:01.539584 containerd[1501]: time="2025-01-29T14:38:01.538527985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:01.635238 containerd[1501]: time="2025-01-29T14:38:01.632051562Z" level=info msg="TearDown network for sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" successfully" Jan 29 14:38:01.635238 containerd[1501]: time="2025-01-29T14:38:01.632112753Z" level=info msg="StopPodSandbox for \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" returns successfully" Jan 29 14:38:01.698044 systemd[1]: cri-containerd-1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da.scope: Deactivated successfully. Jan 29 14:38:01.825547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da-rootfs.mount: Deactivated successfully. 
Jan 29 14:38:01.839483 containerd[1501]: time="2025-01-29T14:38:01.838736923Z" level=info msg="shim disconnected" id=1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da namespace=k8s.io Jan 29 14:38:01.839483 containerd[1501]: time="2025-01-29T14:38:01.838840782Z" level=warning msg="cleaning up after shim disconnected" id=1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da namespace=k8s.io Jan 29 14:38:01.839483 containerd[1501]: time="2025-01-29T14:38:01.838860282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:01.843843 kubelet[2727]: I0129 14:38:01.843182 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bde8eccf-d1f1-447e-84fc-13f58369a03e-node-certs\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.843843 kubelet[2727]: I0129 14:38:01.843303 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-lib-modules\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.843843 kubelet[2727]: I0129 14:38:01.843351 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9pf9\" (UniqueName: \"kubernetes.io/projected/bde8eccf-d1f1-447e-84fc-13f58369a03e-kube-api-access-c9pf9\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.843843 kubelet[2727]: I0129 14:38:01.843384 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-xtables-lock\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.843843 kubelet[2727]: I0129 
14:38:01.843433 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-policysync\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.843843 kubelet[2727]: I0129 14:38:01.843460 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-log-dir\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.844821 kubelet[2727]: I0129 14:38:01.843484 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-lib-calico\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.844821 kubelet[2727]: I0129 14:38:01.843525 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bde8eccf-d1f1-447e-84fc-13f58369a03e-tigera-ca-bundle\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.844821 kubelet[2727]: I0129 14:38:01.843552 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-run-calico\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.844821 kubelet[2727]: I0129 14:38:01.843589 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-flexvol-driver-host\") pod 
\"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.844821 kubelet[2727]: I0129 14:38:01.843635 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-net-dir\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.844821 kubelet[2727]: I0129 14:38:01.843661 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-bin-dir\") pod \"bde8eccf-d1f1-447e-84fc-13f58369a03e\" (UID: \"bde8eccf-d1f1-447e-84fc-13f58369a03e\") " Jan 29 14:38:01.847236 kubelet[2727]: I0129 14:38:01.845316 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.847236 kubelet[2727]: I0129 14:38:01.846684 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.877922 kubelet[2727]: I0129 14:38:01.877340 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.877922 kubelet[2727]: I0129 14:38:01.877431 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-policysync" (OuterVolumeSpecName: "policysync") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.877922 kubelet[2727]: I0129 14:38:01.877482 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.877922 kubelet[2727]: I0129 14:38:01.877520 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.912839 kubelet[2727]: I0129 14:38:01.908547 2727 topology_manager.go:215] "Topology Admit Handler" podUID="1eb020c0-1605-4485-ae10-273ce2e4d2ad" podNamespace="calico-system" podName="calico-node-6p8x4" Jan 29 14:38:01.912783 systemd[1]: var-lib-kubelet-pods-bde8eccf\x2dd1f1\x2d447e\x2d84fc\x2d13f58369a03e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc9pf9.mount: Deactivated successfully. Jan 29 14:38:01.917587 systemd[1]: var-lib-kubelet-pods-bde8eccf\x2dd1f1\x2d447e\x2d84fc\x2d13f58369a03e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jan 29 14:38:01.922278 kubelet[2727]: I0129 14:38:01.922228 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde8eccf-d1f1-447e-84fc-13f58369a03e-kube-api-access-c9pf9" (OuterVolumeSpecName: "kube-api-access-c9pf9") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "kube-api-access-c9pf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 14:38:01.923267 kubelet[2727]: E0129 14:38:01.923044 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bde8eccf-d1f1-447e-84fc-13f58369a03e" containerName="calico-node" Jan 29 14:38:01.923267 kubelet[2727]: E0129 14:38:01.923102 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bde8eccf-d1f1-447e-84fc-13f58369a03e" containerName="flexvol-driver" Jan 29 14:38:01.923267 kubelet[2727]: E0129 14:38:01.923119 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bde8eccf-d1f1-447e-84fc-13f58369a03e" containerName="install-cni" Jan 29 14:38:01.930954 kubelet[2727]: I0129 14:38:01.928021 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.930954 kubelet[2727]: I0129 14:38:01.929638 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.930954 kubelet[2727]: I0129 14:38:01.929724 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.943992 2727 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-flexvol-driver-host\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944037 2727 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-net-dir\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944055 2727 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-bin-dir\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944071 2727 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-lib-modules\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944086 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c9pf9\" (UniqueName: \"kubernetes.io/projected/bde8eccf-d1f1-447e-84fc-13f58369a03e-kube-api-access-c9pf9\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944101 
2727 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-xtables-lock\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944116 2727 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-policysync\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.944582 kubelet[2727]: I0129 14:38:01.944133 2727 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-cni-log-dir\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.946880 kubelet[2727]: I0129 14:38:01.944148 2727 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-lib-calico\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.946880 kubelet[2727]: I0129 14:38:01.944162 2727 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bde8eccf-d1f1-447e-84fc-13f58369a03e-var-run-calico\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:01.951503 systemd[1]: var-lib-kubelet-pods-bde8eccf\x2dd1f1\x2d447e\x2d84fc\x2d13f58369a03e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 29 14:38:01.953606 kubelet[2727]: I0129 14:38:01.953180 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde8eccf-d1f1-447e-84fc-13f58369a03e-node-certs" (OuterVolumeSpecName: "node-certs") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 14:38:01.959076 kubelet[2727]: I0129 14:38:01.954332 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde8eccf-d1f1-447e-84fc-13f58369a03e" containerName="calico-node" Jan 29 14:38:01.967657 kubelet[2727]: I0129 14:38:01.967604 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bde8eccf-d1f1-447e-84fc-13f58369a03e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "bde8eccf-d1f1-447e-84fc-13f58369a03e" (UID: "bde8eccf-d1f1-447e-84fc-13f58369a03e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 14:38:01.973787 containerd[1501]: time="2025-01-29T14:38:01.973729715Z" level=info msg="StopContainer for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" returns successfully" Jan 29 14:38:01.975126 containerd[1501]: time="2025-01-29T14:38:01.975059892Z" level=info msg="StopPodSandbox for \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\"" Jan 29 14:38:01.975375 containerd[1501]: time="2025-01-29T14:38:01.975330474Z" level=info msg="Container to stop \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 14:38:01.983468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51-shm.mount: Deactivated successfully. Jan 29 14:38:01.990293 systemd[1]: Created slice kubepods-besteffort-pod1eb020c0_1605_4485_ae10_273ce2e4d2ad.slice - libcontainer container kubepods-besteffort-pod1eb020c0_1605_4485_ae10_273ce2e4d2ad.slice. Jan 29 14:38:02.029136 systemd[1]: cri-containerd-df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51.scope: Deactivated successfully. 
Jan 29 14:38:02.045631 kubelet[2727]: I0129 14:38:02.045269 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb020c0-1605-4485-ae10-273ce2e4d2ad-tigera-ca-bundle\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.045897 kubelet[2727]: I0129 14:38:02.045867 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-var-lib-calico\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.045983 kubelet[2727]: I0129 14:38:02.045927 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-cni-log-dir\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.045983 kubelet[2727]: I0129 14:38:02.045959 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1eb020c0-1605-4485-ae10-273ce2e4d2ad-node-certs\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046101 kubelet[2727]: I0129 14:38:02.045999 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-policysync\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046101 kubelet[2727]: I0129 14:38:02.046044 2727 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-lib-modules\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046101 kubelet[2727]: I0129 14:38:02.046074 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-var-run-calico\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046272 kubelet[2727]: I0129 14:38:02.046104 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-flexvol-driver-host\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046272 kubelet[2727]: I0129 14:38:02.046132 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-xtables-lock\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046272 kubelet[2727]: I0129 14:38:02.046169 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-cni-bin-dir\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046272 kubelet[2727]: I0129 14:38:02.046197 2727 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1eb020c0-1605-4485-ae10-273ce2e4d2ad-cni-net-dir\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046272 kubelet[2727]: I0129 14:38:02.046226 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnwb\" (UniqueName: \"kubernetes.io/projected/1eb020c0-1605-4485-ae10-273ce2e4d2ad-kube-api-access-hxnwb\") pod \"calico-node-6p8x4\" (UID: \"1eb020c0-1605-4485-ae10-273ce2e4d2ad\") " pod="calico-system/calico-node-6p8x4" Jan 29 14:38:02.046272 kubelet[2727]: I0129 14:38:02.046264 2727 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bde8eccf-d1f1-447e-84fc-13f58369a03e-node-certs\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:02.046598 kubelet[2727]: I0129 14:38:02.046295 2727 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bde8eccf-d1f1-447e-84fc-13f58369a03e-tigera-ca-bundle\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:02.085523 containerd[1501]: time="2025-01-29T14:38:02.084402453Z" level=info msg="shim disconnected" id=df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51 namespace=k8s.io Jan 29 14:38:02.085523 containerd[1501]: time="2025-01-29T14:38:02.084486083Z" level=warning msg="cleaning up after shim disconnected" id=df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51 namespace=k8s.io Jan 29 14:38:02.085523 containerd[1501]: time="2025-01-29T14:38:02.084515308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:02.155653 containerd[1501]: time="2025-01-29T14:38:02.154723961Z" level=info msg="TearDown network for sandbox 
\"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" successfully" Jan 29 14:38:02.155653 containerd[1501]: time="2025-01-29T14:38:02.154771974Z" level=info msg="StopPodSandbox for \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" returns successfully" Jan 29 14:38:02.249461 kubelet[2727]: I0129 14:38:02.248936 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c990bf3d-6953-4265-9c69-aca2f2c4441a-tigera-ca-bundle\") pod \"c990bf3d-6953-4265-9c69-aca2f2c4441a\" (UID: \"c990bf3d-6953-4265-9c69-aca2f2c4441a\") " Jan 29 14:38:02.249461 kubelet[2727]: I0129 14:38:02.249015 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c990bf3d-6953-4265-9c69-aca2f2c4441a-typha-certs\") pod \"c990bf3d-6953-4265-9c69-aca2f2c4441a\" (UID: \"c990bf3d-6953-4265-9c69-aca2f2c4441a\") " Jan 29 14:38:02.249461 kubelet[2727]: I0129 14:38:02.249064 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz25j\" (UniqueName: \"kubernetes.io/projected/c990bf3d-6953-4265-9c69-aca2f2c4441a-kube-api-access-gz25j\") pod \"c990bf3d-6953-4265-9c69-aca2f2c4441a\" (UID: \"c990bf3d-6953-4265-9c69-aca2f2c4441a\") " Jan 29 14:38:02.275619 kubelet[2727]: I0129 14:38:02.274492 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c990bf3d-6953-4265-9c69-aca2f2c4441a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "c990bf3d-6953-4265-9c69-aca2f2c4441a" (UID: "c990bf3d-6953-4265-9c69-aca2f2c4441a"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 14:38:02.276858 kubelet[2727]: I0129 14:38:02.275893 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c990bf3d-6953-4265-9c69-aca2f2c4441a-kube-api-access-gz25j" (OuterVolumeSpecName: "kube-api-access-gz25j") pod "c990bf3d-6953-4265-9c69-aca2f2c4441a" (UID: "c990bf3d-6953-4265-9c69-aca2f2c4441a"). InnerVolumeSpecName "kube-api-access-gz25j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 14:38:02.279500 kubelet[2727]: I0129 14:38:02.279226 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c990bf3d-6953-4265-9c69-aca2f2c4441a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c990bf3d-6953-4265-9c69-aca2f2c4441a" (UID: "c990bf3d-6953-4265-9c69-aca2f2c4441a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 14:38:02.311396 containerd[1501]: time="2025-01-29T14:38:02.311238893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6p8x4,Uid:1eb020c0-1605-4485-ae10-273ce2e4d2ad,Namespace:calico-system,Attempt:0,}" Jan 29 14:38:02.350618 kubelet[2727]: I0129 14:38:02.350242 2727 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c990bf3d-6953-4265-9c69-aca2f2c4441a-typha-certs\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:02.350618 kubelet[2727]: I0129 14:38:02.350310 2727 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c990bf3d-6953-4265-9c69-aca2f2c4441a-tigera-ca-bundle\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:02.350618 kubelet[2727]: I0129 14:38:02.350332 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gz25j\" (UniqueName: 
\"kubernetes.io/projected/c990bf3d-6953-4265-9c69-aca2f2c4441a-kube-api-access-gz25j\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\"" Jan 29 14:38:02.395797 containerd[1501]: time="2025-01-29T14:38:02.394731062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:38:02.395797 containerd[1501]: time="2025-01-29T14:38:02.395488480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:38:02.395797 containerd[1501]: time="2025-01-29T14:38:02.395512294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:38:02.399390 containerd[1501]: time="2025-01-29T14:38:02.396639915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:38:02.463664 systemd[1]: Started cri-containerd-a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315.scope - libcontainer container a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315. 
Jan 29 14:38:02.575026 containerd[1501]: time="2025-01-29T14:38:02.574655307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6p8x4,Uid:1eb020c0-1605-4485-ae10-273ce2e4d2ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\"" Jan 29 14:38:02.586722 containerd[1501]: time="2025-01-29T14:38:02.585857643Z" level=info msg="CreateContainer within sandbox \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 14:38:02.633835 containerd[1501]: time="2025-01-29T14:38:02.633054150Z" level=info msg="CreateContainer within sandbox \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee\"" Jan 29 14:38:02.635553 containerd[1501]: time="2025-01-29T14:38:02.635023751Z" level=info msg="StartContainer for \"e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee\"" Jan 29 14:38:02.667962 kubelet[2727]: I0129 14:38:02.667211 2727 scope.go:117] "RemoveContainer" containerID="1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da" Jan 29 14:38:02.694591 containerd[1501]: time="2025-01-29T14:38:02.694542174Z" level=info msg="RemoveContainer for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\"" Jan 29 14:38:02.695664 systemd[1]: Removed slice kubepods-besteffort-podc990bf3d_6953_4265_9c69_aca2f2c4441a.slice - libcontainer container kubepods-besteffort-podc990bf3d_6953_4265_9c69_aca2f2c4441a.slice. 
Jan 29 14:38:02.739420 containerd[1501]: time="2025-01-29T14:38:02.739244165Z" level=info msg="RemoveContainer for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" returns successfully" Jan 29 14:38:02.743431 systemd[1]: Started cri-containerd-e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee.scope - libcontainer container e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee. Jan 29 14:38:02.750953 kubelet[2727]: I0129 14:38:02.750900 2727 scope.go:117] "RemoveContainer" containerID="1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da" Jan 29 14:38:02.778100 containerd[1501]: time="2025-01-29T14:38:02.753282340Z" level=error msg="ContainerStatus for \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\": not found" Jan 29 14:38:02.779757 kubelet[2727]: E0129 14:38:02.779696 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\": not found" containerID="1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da" Jan 29 14:38:02.779927 kubelet[2727]: I0129 14:38:02.779773 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da"} err="failed to get container status \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a9cdf6b8582e417a6c387dd8cc36671e076e69d87cbf589b4c312610571b3da\": not found" Jan 29 14:38:02.779927 kubelet[2727]: I0129 14:38:02.779839 2727 scope.go:117] "RemoveContainer" 
containerID="aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb" Jan 29 14:38:02.781920 systemd[1]: Removed slice kubepods-besteffort-podbde8eccf_d1f1_447e_84fc_13f58369a03e.slice - libcontainer container kubepods-besteffort-podbde8eccf_d1f1_447e_84fc_13f58369a03e.slice. Jan 29 14:38:02.782072 systemd[1]: kubepods-besteffort-podbde8eccf_d1f1_447e_84fc_13f58369a03e.slice: Consumed 3.832s CPU time. Jan 29 14:38:02.788117 containerd[1501]: time="2025-01-29T14:38:02.787549598Z" level=info msg="RemoveContainer for \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\"" Jan 29 14:38:02.795214 containerd[1501]: time="2025-01-29T14:38:02.795173641Z" level=info msg="RemoveContainer for \"aaf035c40e295dcbf84ee6332b1a7eb733e5696a45fc36ca70f7ea82073c17bb\" returns successfully" Jan 29 14:38:02.796348 kubelet[2727]: I0129 14:38:02.796222 2727 scope.go:117] "RemoveContainer" containerID="a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d" Jan 29 14:38:02.804731 containerd[1501]: time="2025-01-29T14:38:02.804687721Z" level=info msg="RemoveContainer for \"a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d\"" Jan 29 14:38:02.844846 systemd[1]: var-lib-kubelet-pods-c990bf3d\x2d6953\x2d4265\x2d9c69\x2daca2f2c4441a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 29 14:38:02.845743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51-rootfs.mount: Deactivated successfully. 
Jan 29 14:38:02.851008 kubelet[2727]: I0129 14:38:02.847556 2727 scope.go:117] "RemoveContainer" containerID="a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f" Jan 29 14:38:02.851469 containerd[1501]: time="2025-01-29T14:38:02.846896269Z" level=info msg="RemoveContainer for \"a6ff3da03a0689833e444b96911d0c7b6cb2566ad136ad4744e4e36296fc195d\" returns successfully" Jan 29 14:38:02.845903 systemd[1]: var-lib-kubelet-pods-c990bf3d\x2d6953\x2d4265\x2d9c69\x2daca2f2c4441a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgz25j.mount: Deactivated successfully. Jan 29 14:38:02.846022 systemd[1]: var-lib-kubelet-pods-c990bf3d\x2d6953\x2d4265\x2d9c69\x2daca2f2c4441a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 29 14:38:02.853830 containerd[1501]: time="2025-01-29T14:38:02.853527163Z" level=info msg="RemoveContainer for \"a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f\"" Jan 29 14:38:02.862852 containerd[1501]: time="2025-01-29T14:38:02.861773595Z" level=info msg="RemoveContainer for \"a69055f40e29bd4488240af653164a0957678e1c4e1fe0dbe80c60e493f3356f\" returns successfully" Jan 29 14:38:02.930476 containerd[1501]: time="2025-01-29T14:38:02.929612909Z" level=info msg="StartContainer for \"e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee\" returns successfully" Jan 29 14:38:03.025168 containerd[1501]: time="2025-01-29T14:38:03.025053444Z" level=info msg="StopPodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\"" Jan 29 14:38:03.050686 kubelet[2727]: I0129 14:38:03.050300 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde8eccf-d1f1-447e-84fc-13f58369a03e" path="/var/lib/kubelet/pods/bde8eccf-d1f1-447e-84fc-13f58369a03e/volumes" Jan 29 14:38:03.054286 kubelet[2727]: I0129 14:38:03.053454 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c990bf3d-6953-4265-9c69-aca2f2c4441a" 
path="/var/lib/kubelet/pods/c990bf3d-6953-4265-9c69-aca2f2c4441a/volumes" Jan 29 14:38:03.134553 systemd[1]: cri-containerd-e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee.scope: Deactivated successfully. Jan 29 14:38:03.319488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee-rootfs.mount: Deactivated successfully. Jan 29 14:38:03.417411 containerd[1501]: time="2025-01-29T14:38:03.416884521Z" level=info msg="shim disconnected" id=e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee namespace=k8s.io Jan 29 14:38:03.417411 containerd[1501]: time="2025-01-29T14:38:03.416980368Z" level=warning msg="cleaning up after shim disconnected" id=e03c72c92108b4631ceb3563a6cfc611a5ef6d2ce95ea632c39afb8f12a073ee namespace=k8s.io Jan 29 14:38:03.417411 containerd[1501]: time="2025-01-29T14:38:03.417021394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.510 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f491968-0d35-4eb0-81d6-df6823ca943a", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef", Pod:"calico-apiserver-b98d747cb-hclzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali975492754d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.515 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.516 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" iface="eth0" netns="" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.516 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.516 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.622 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.623 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.623 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.653 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.653 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.661 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:03.689048 containerd[1501]: 2025-01-29 14:38:03.673 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:03.692084 containerd[1501]: time="2025-01-29T14:38:03.690074517Z" level=info msg="TearDown network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" successfully" Jan 29 14:38:03.692084 containerd[1501]: time="2025-01-29T14:38:03.690111696Z" level=info msg="StopPodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" returns successfully" Jan 29 14:38:03.695006 containerd[1501]: time="2025-01-29T14:38:03.694597732Z" level=info msg="RemovePodSandbox for \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\"" Jan 29 14:38:03.695006 containerd[1501]: time="2025-01-29T14:38:03.694661400Z" level=info msg="Forcibly stopping sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\"" Jan 29 14:38:03.780451 containerd[1501]: time="2025-01-29T14:38:03.780388184Z" level=info msg="CreateContainer within sandbox \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 14:38:03.817495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124362526.mount: Deactivated successfully. Jan 29 14:38:03.826524 containerd[1501]: time="2025-01-29T14:38:03.826374052Z" level=info msg="CreateContainer within sandbox \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991\"" Jan 29 14:38:03.829267 containerd[1501]: time="2025-01-29T14:38:03.829164536Z" level=info msg="StartContainer for \"0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991\"" Jan 29 14:38:03.941175 systemd[1]: Started cri-containerd-0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991.scope - libcontainer container 0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991. Jan 29 14:38:03.966985 kubelet[2727]: I0129 14:38:03.966734 2727 topology_manager.go:215] "Topology Admit Handler" podUID="ea215272-cc63-45f1-9268-e1df5288d954" podNamespace="calico-system" podName="calico-typha-f67b96744-v5cf5" Jan 29 14:38:03.968678 kubelet[2727]: E0129 14:38:03.967483 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c990bf3d-6953-4265-9c69-aca2f2c4441a" containerName="calico-typha" Jan 29 14:38:03.968678 kubelet[2727]: I0129 14:38:03.967545 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c990bf3d-6953-4265-9c69-aca2f2c4441a" containerName="calico-typha" Jan 29 14:38:03.976008 kubelet[2727]: I0129 14:38:03.975864 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea215272-cc63-45f1-9268-e1df5288d954-tigera-ca-bundle\") pod \"calico-typha-f67b96744-v5cf5\" (UID: \"ea215272-cc63-45f1-9268-e1df5288d954\") " pod="calico-system/calico-typha-f67b96744-v5cf5" Jan 29 14:38:03.976008 kubelet[2727]: I0129 
14:38:03.975930 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ea215272-cc63-45f1-9268-e1df5288d954-typha-certs\") pod \"calico-typha-f67b96744-v5cf5\" (UID: \"ea215272-cc63-45f1-9268-e1df5288d954\") " pod="calico-system/calico-typha-f67b96744-v5cf5" Jan 29 14:38:03.976008 kubelet[2727]: I0129 14:38:03.975970 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5n92\" (UniqueName: \"kubernetes.io/projected/ea215272-cc63-45f1-9268-e1df5288d954-kube-api-access-p5n92\") pod \"calico-typha-f67b96744-v5cf5\" (UID: \"ea215272-cc63-45f1-9268-e1df5288d954\") " pod="calico-system/calico-typha-f67b96744-v5cf5" Jan 29 14:38:03.983053 systemd[1]: Created slice kubepods-besteffort-podea215272_cc63_45f1_9268_e1df5288d954.slice - libcontainer container kubepods-besteffort-podea215272_cc63_45f1_9268_e1df5288d954.slice. Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:03.873 [WARNING][5269] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f491968-0d35-4eb0-81d6-df6823ca943a", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef", Pod:"calico-apiserver-b98d747cb-hclzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali975492754d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:03.877 [INFO][5269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:03.878 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" iface="eth0" netns="" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:03.878 [INFO][5269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:03.878 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.089 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.089 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.089 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.138 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.139 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" HandleID="k8s-pod-network.a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--hclzq-eth0" Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.155 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:04.169568 containerd[1501]: 2025-01-29 14:38:04.163 [INFO][5269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90" Jan 29 14:38:04.170678 containerd[1501]: time="2025-01-29T14:38:04.169777336Z" level=info msg="TearDown network for sandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" successfully" Jan 29 14:38:04.194301 containerd[1501]: time="2025-01-29T14:38:04.193013032Z" level=info msg="StartContainer for \"0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991\" returns successfully" Jan 29 14:38:04.196144 containerd[1501]: time="2025-01-29T14:38:04.195923163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:38:04.197749 containerd[1501]: time="2025-01-29T14:38:04.197694700Z" level=info msg="RemovePodSandbox \"a56284ac47134e22e52a6c1003143b2edda98a706945cf6174caf6c44586de90\" returns successfully" Jan 29 14:38:04.199468 containerd[1501]: time="2025-01-29T14:38:04.199409395Z" level=info msg="StopPodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\"" Jan 29 14:38:04.271234 containerd[1501]: time="2025-01-29T14:38:04.271035975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:04.272947 containerd[1501]: time="2025-01-29T14:38:04.272824181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 14:38:04.278427 containerd[1501]: time="2025-01-29T14:38:04.277953345Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:04.285847 containerd[1501]: time="2025-01-29T14:38:04.285429630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:04.287294 containerd[1501]: time="2025-01-29T14:38:04.287247013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 5.376839068s" Jan 29 14:38:04.287633 containerd[1501]: time="2025-01-29T14:38:04.287602991Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 14:38:04.291894 containerd[1501]: time="2025-01-29T14:38:04.290996270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 14:38:04.313272 containerd[1501]: time="2025-01-29T14:38:04.313074569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f67b96744-v5cf5,Uid:ea215272-cc63-45f1-9268-e1df5288d954,Namespace:calico-system,Attempt:0,}" Jan 29 14:38:04.315134 containerd[1501]: time="2025-01-29T14:38:04.314929684Z" level=info msg="CreateContainer within sandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 14:38:04.364062 containerd[1501]: time="2025-01-29T14:38:04.364000173Z" level=info msg="CreateContainer within sandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\"" Jan 29 14:38:04.368630 containerd[1501]: time="2025-01-29T14:38:04.368024540Z" level=info msg="StartContainer for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\"" Jan 29 14:38:04.407567 containerd[1501]: time="2025-01-29T14:38:04.406258424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:38:04.407567 containerd[1501]: time="2025-01-29T14:38:04.406388301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:38:04.407567 containerd[1501]: time="2025-01-29T14:38:04.406412690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:38:04.407567 containerd[1501]: time="2025-01-29T14:38:04.406583898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:38:04.471025 systemd[1]: Started cri-containerd-ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a.scope - libcontainer container ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a. Jan 29 14:38:04.484350 systemd[1]: Started cri-containerd-b4f7a7b167a5355e089240ad1a48911b91ce224f747490ad5c29a0a0df79cb1f.scope - libcontainer container b4f7a7b167a5355e089240ad1a48911b91ce224f747490ad5c29a0a0df79cb1f. Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.366 [WARNING][5338] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0", GenerateName:"calico-kube-controllers-77d6948c77-", Namespace:"calico-system", SelfLink:"", UID:"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d6948c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81", Pod:"calico-kube-controllers-77d6948c77-z6748", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7190d9e812", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.368 [INFO][5338] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.368 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" iface="eth0" netns="" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.368 [INFO][5338] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.369 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.498 [INFO][5354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.499 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.499 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.514 [WARNING][5354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.515 [INFO][5354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.517 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:04.524036 containerd[1501]: 2025-01-29 14:38:04.520 [INFO][5338] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.525838 containerd[1501]: time="2025-01-29T14:38:04.525378356Z" level=info msg="TearDown network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" successfully" Jan 29 14:38:04.525838 containerd[1501]: time="2025-01-29T14:38:04.525600716Z" level=info msg="StopPodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" returns successfully" Jan 29 14:38:04.527132 containerd[1501]: time="2025-01-29T14:38:04.526880748Z" level=info msg="RemovePodSandbox for \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\"" Jan 29 14:38:04.527132 containerd[1501]: time="2025-01-29T14:38:04.527008631Z" level=info msg="Forcibly stopping sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\"" Jan 29 14:38:04.691714 containerd[1501]: time="2025-01-29T14:38:04.691643840Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:04.694685 containerd[1501]: time="2025-01-29T14:38:04.693259921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 14:38:04.715836 containerd[1501]: time="2025-01-29T14:38:04.715309042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 423.144282ms" Jan 29 14:38:04.715836 containerd[1501]: time="2025-01-29T14:38:04.715371752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 
14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.609 [WARNING][5429] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0", GenerateName:"calico-kube-controllers-77d6948c77-", Namespace:"calico-system", SelfLink:"", UID:"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d6948c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81", Pod:"calico-kube-controllers-77d6948c77-z6748", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7190d9e812", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.609 [INFO][5429] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.610 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" iface="eth0" netns="" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.610 [INFO][5429] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.610 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.676 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.677 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.677 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.696 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.700 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" HandleID="k8s-pod-network.e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.706 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:04.732336 containerd[1501]: 2025-01-29 14:38:04.716 [INFO][5429] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c" Jan 29 14:38:04.735911 containerd[1501]: time="2025-01-29T14:38:04.733963961Z" level=info msg="TearDown network for sandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" successfully" Jan 29 14:38:04.741691 containerd[1501]: time="2025-01-29T14:38:04.741332916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 14:38:04.755319 containerd[1501]: time="2025-01-29T14:38:04.754160592Z" level=info msg="CreateContainer within sandbox \"9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 14:38:04.771073 containerd[1501]: time="2025-01-29T14:38:04.770362308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:38:04.776762 containerd[1501]: time="2025-01-29T14:38:04.776627364Z" level=info msg="RemovePodSandbox \"e4aedcfd8f8d76550ed8023b77790f4de30033156706d3dbee9a89120c67bb7c\" returns successfully" Jan 29 14:38:04.778254 containerd[1501]: time="2025-01-29T14:38:04.777350341Z" level=info msg="StartContainer for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" returns successfully" Jan 29 14:38:04.788943 containerd[1501]: time="2025-01-29T14:38:04.788885231Z" level=info msg="StopPodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\"" Jan 29 14:38:04.795833 containerd[1501]: time="2025-01-29T14:38:04.795591024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f67b96744-v5cf5,Uid:ea215272-cc63-45f1-9268-e1df5288d954,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4f7a7b167a5355e089240ad1a48911b91ce224f747490ad5c29a0a0df79cb1f\"" Jan 29 14:38:04.829433 containerd[1501]: time="2025-01-29T14:38:04.828759005Z" level=info msg="StopContainer for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" with timeout 30 (s)" Jan 29 14:38:04.833697 containerd[1501]: time="2025-01-29T14:38:04.832438694Z" level=info msg="CreateContainer within sandbox \"9975b3fe69ef567883c3dcf45b3919b3b8aaa93a453c53be742a522efbb27eef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"acd7fce0c1f9d58bf5c1a77b30bf53660cef5f7d24bd19f5b724c40c76eac003\"" Jan 29 14:38:04.834544 containerd[1501]: time="2025-01-29T14:38:04.833877455Z" level=info msg="Stop container \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" with signal terminated" Jan 29 14:38:04.838331 containerd[1501]: time="2025-01-29T14:38:04.835129583Z" level=info msg="StartContainer for \"acd7fce0c1f9d58bf5c1a77b30bf53660cef5f7d24bd19f5b724c40c76eac003\"" Jan 29 14:38:04.844232 containerd[1501]: time="2025-01-29T14:38:04.843563325Z" level=info msg="CreateContainer within sandbox 
\"b4f7a7b167a5355e089240ad1a48911b91ce224f747490ad5c29a0a0df79cb1f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 14:38:04.857505 systemd[1]: cri-containerd-ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a.scope: Deactivated successfully. Jan 29 14:38:04.932987 kubelet[2727]: I0129 14:38:04.931833 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77d6948c77-z6748" podStartSLOduration=29.50432716 podStartE2EDuration="40.931763953s" podCreationTimestamp="2025-01-29 14:37:24 +0000 UTC" firstStartedPulling="2025-01-29 14:37:52.86327036 +0000 UTC m=+50.055961617" lastFinishedPulling="2025-01-29 14:38:04.290707147 +0000 UTC m=+61.483398410" observedRunningTime="2025-01-29 14:38:04.931140454 +0000 UTC m=+62.123831735" watchObservedRunningTime="2025-01-29 14:38:04.931763953 +0000 UTC m=+62.124455224" Jan 29 14:38:04.967637 containerd[1501]: time="2025-01-29T14:38:04.967296720Z" level=info msg="CreateContainer within sandbox \"b4f7a7b167a5355e089240ad1a48911b91ce224f747490ad5c29a0a0df79cb1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b64d29bcf5f45a5bee347b7e7a0ee4715de61d49dc8f0c4bd158bc4abce6c2b0\"" Jan 29 14:38:04.970005 containerd[1501]: time="2025-01-29T14:38:04.969954791Z" level=info msg="StartContainer for \"b64d29bcf5f45a5bee347b7e7a0ee4715de61d49dc8f0c4bd158bc4abce6c2b0\"" Jan 29 14:38:05.062704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a-rootfs.mount: Deactivated successfully. Jan 29 14:38:05.073117 systemd[1]: Started cri-containerd-acd7fce0c1f9d58bf5c1a77b30bf53660cef5f7d24bd19f5b724c40c76eac003.scope - libcontainer container acd7fce0c1f9d58bf5c1a77b30bf53660cef5f7d24bd19f5b724c40c76eac003. 
Jan 29 14:38:05.093262 containerd[1501]: time="2025-01-29T14:38:05.092931261Z" level=info msg="shim disconnected" id=ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a namespace=k8s.io Jan 29 14:38:05.093262 containerd[1501]: time="2025-01-29T14:38:05.093008746Z" level=warning msg="cleaning up after shim disconnected" id=ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a namespace=k8s.io Jan 29 14:38:05.093262 containerd[1501]: time="2025-01-29T14:38:05.093027596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:05.095555 containerd[1501]: time="2025-01-29T14:38:05.092966515Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/d9c1613ff57182fc1159e6e4ec77a863cca781bc87e4258af6f83abc08034536->@: write: broken pipe" runtime=io.containerd.runc.v2 Jan 29 14:38:05.095555 containerd[1501]: time="2025-01-29T14:38:05.094832418Z" level=error msg="ExecSync for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" failed" error="failed to exec in container: failed to start exec \"c194258f36d25984955c0a0c11b828997b3a0cdaa4af0279b430bdedbc4fff2d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" Jan 29 14:38:05.095676 kubelet[2727]: E0129 14:38:05.095134 2727 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c194258f36d25984955c0a0c11b828997b3a0cdaa4af0279b430bdedbc4fff2d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a" cmd=["/usr/bin/check-status","-r"] Jan 29 14:38:05.099022 containerd[1501]: time="2025-01-29T14:38:05.098577342Z" level=error msg="ExecSync for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: 
no running task found: task ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a not found: not found" Jan 29 14:38:05.099376 kubelet[2727]: E0129 14:38:05.099133 2727 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a not found: not found" containerID="ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a" cmd=["/usr/bin/check-status","-r"] Jan 29 14:38:05.100925 containerd[1501]: time="2025-01-29T14:38:05.100847465Z" level=error msg="ExecSync for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a not found: not found" Jan 29 14:38:05.101363 kubelet[2727]: E0129 14:38:05.101282 2727 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a not found: not found" containerID="ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a" cmd=["/usr/bin/check-status","-r"] Jan 29 14:38:05.141054 systemd[1]: Started cri-containerd-b64d29bcf5f45a5bee347b7e7a0ee4715de61d49dc8f0c4bd158bc4abce6c2b0.scope - libcontainer container b64d29bcf5f45a5bee347b7e7a0ee4715de61d49dc8f0c4bd158bc4abce6c2b0. 
Jan 29 14:38:05.174875 containerd[1501]: time="2025-01-29T14:38:05.174641368Z" level=warning msg="cleanup warnings time=\"2025-01-29T14:38:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 14:38:05.211676 containerd[1501]: time="2025-01-29T14:38:05.211355597Z" level=info msg="StopContainer for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" returns successfully"
Jan 29 14:38:05.214478 containerd[1501]: time="2025-01-29T14:38:05.214195952Z" level=info msg="StopPodSandbox for \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\""
Jan 29 14:38:05.214478 containerd[1501]: time="2025-01-29T14:38:05.214250244Z" level=info msg="Container to stop \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 14:38:05.279160 systemd[1]: cri-containerd-0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81.scope: Deactivated successfully.
Jan 29 14:38:05.336773 containerd[1501]: time="2025-01-29T14:38:05.336120081Z" level=info msg="shim disconnected" id=0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81 namespace=k8s.io
Jan 29 14:38:05.336773 containerd[1501]: time="2025-01-29T14:38:05.336543317Z" level=warning msg="cleaning up after shim disconnected" id=0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81 namespace=k8s.io
Jan 29 14:38:05.336773 containerd[1501]: time="2025-01-29T14:38:05.336562096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.182 [WARNING][5469] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9171cee1-001f-4815-a918-01b00e67d3d3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c", Pod:"csi-node-driver-zh8xr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc026b2ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.182 [INFO][5469] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.182 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" iface="eth0" netns=""
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.182 [INFO][5469] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.182 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.289 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0"
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.290 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.291 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.302 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0"
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.302 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0"
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.310 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:38:05.363563 containerd[1501]: 2025-01-29 14:38:05.325 [INFO][5469] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.364385 containerd[1501]: time="2025-01-29T14:38:05.363634711Z" level=info msg="TearDown network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" successfully"
Jan 29 14:38:05.364385 containerd[1501]: time="2025-01-29T14:38:05.363688777Z" level=info msg="StopPodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" returns successfully"
Jan 29 14:38:05.365910 containerd[1501]: time="2025-01-29T14:38:05.365398595Z" level=info msg="RemovePodSandbox for \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\""
Jan 29 14:38:05.365910 containerd[1501]: time="2025-01-29T14:38:05.365441305Z" level=info msg="Forcibly stopping sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\""
Jan 29 14:38:05.441957 containerd[1501]: time="2025-01-29T14:38:05.441534817Z" level=info msg="StartContainer for \"acd7fce0c1f9d58bf5c1a77b30bf53660cef5f7d24bd19f5b724c40c76eac003\" returns successfully"
Jan 29 14:38:05.506481 containerd[1501]: time="2025-01-29T14:38:05.503243211Z" level=info msg="StartContainer for \"b64d29bcf5f45a5bee347b7e7a0ee4715de61d49dc8f0c4bd158bc4abce6c2b0\" returns successfully"
Jan 29 14:38:05.613721 systemd-networkd[1417]: calic7190d9e812: Link DOWN
Jan 29 14:38:05.616646 systemd-networkd[1417]: calic7190d9e812: Lost carrier
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.563 [WARNING][5619] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9171cee1-001f-4815-a918-01b00e67d3d3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c", Pod:"csi-node-driver-zh8xr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc026b2ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.565 [INFO][5619] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.565 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" iface="eth0" netns=""
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.566 [INFO][5619] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.566 [INFO][5619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.708 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0"
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.708 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.709 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.725 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0"
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.725 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" HandleID="k8s-pod-network.b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3" Workload="srv--rni4s.gb1.brightbox.com-k8s-csi--node--driver--zh8xr-eth0"
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.731 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:38:05.746090 containerd[1501]: 2025-01-29 14:38:05.738 [INFO][5619] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3"
Jan 29 14:38:05.749697 containerd[1501]: time="2025-01-29T14:38:05.747905856Z" level=info msg="TearDown network for sandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" successfully"
Jan 29 14:38:05.753881 containerd[1501]: time="2025-01-29T14:38:05.753721669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 14:38:05.754362 containerd[1501]: time="2025-01-29T14:38:05.754200795Z" level=info msg="RemovePodSandbox \"b1a4a8894a609d8ba17c88103f4e1682a3dfcf816c41037cd302d6671ee760b3\" returns successfully"
Jan 29 14:38:05.757261 containerd[1501]: time="2025-01-29T14:38:05.756218869Z" level=info msg="StopPodSandbox for \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\""
Jan 29 14:38:05.757261 containerd[1501]: time="2025-01-29T14:38:05.756336841Z" level=info msg="TearDown network for sandbox \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" successfully"
Jan 29 14:38:05.757261 containerd[1501]: time="2025-01-29T14:38:05.756360787Z" level=info msg="StopPodSandbox for \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" returns successfully"
Jan 29 14:38:05.759832 containerd[1501]: time="2025-01-29T14:38:05.757862343Z" level=info msg="RemovePodSandbox for \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\""
Jan 29 14:38:05.759832 containerd[1501]: time="2025-01-29T14:38:05.757905744Z" level=info msg="Forcibly stopping sandbox \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\""
Jan 29 14:38:05.759832 containerd[1501]: time="2025-01-29T14:38:05.757977890Z" level=info msg="TearDown network for sandbox \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" successfully"
Jan 29 14:38:05.771733 containerd[1501]: time="2025-01-29T14:38:05.771676446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 14:38:05.772575 containerd[1501]: time="2025-01-29T14:38:05.772254399Z" level=info msg="RemovePodSandbox \"df130c88186aedd04a86bf39f6277ada377a0ed955aa53dcf7f7363429765a51\" returns successfully"
Jan 29 14:38:05.774081 containerd[1501]: time="2025-01-29T14:38:05.773612678Z" level=info msg="StopPodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\""
Jan 29 14:38:05.846608 kubelet[2727]: I0129 14:38:05.846542 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:38:05.885637 kubelet[2727]: I0129 14:38:05.885538 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f67b96744-v5cf5" podStartSLOduration=5.885509494 podStartE2EDuration="5.885509494s" podCreationTimestamp="2025-01-29 14:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:38:05.880333128 +0000 UTC m=+63.073024403" watchObservedRunningTime="2025-01-29 14:38:05.885509494 +0000 UTC m=+63.078200759"
Jan 29 14:38:05.902907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81-rootfs.mount: Deactivated successfully.
Jan 29 14:38:05.903075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81-shm.mount: Deactivated successfully.
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.607 [INFO][5641] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.607 [INFO][5641] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" iface="eth0" netns="/var/run/netns/cni-7636d157-c6bf-5e23-e18d-d708ee166977"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.608 [INFO][5641] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" iface="eth0" netns="/var/run/netns/cni-7636d157-c6bf-5e23-e18d-d708ee166977"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.619 [INFO][5641] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" after=11.220896ms iface="eth0" netns="/var/run/netns/cni-7636d157-c6bf-5e23-e18d-d708ee166977"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.619 [INFO][5641] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.619 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.780 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.780 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.780 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.990 [INFO][5665] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.991 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:05.994 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:38:06.006400 containerd[1501]: 2025-01-29 14:38:06.000 [INFO][5641] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:38:06.008274 containerd[1501]: time="2025-01-29T14:38:06.007602929Z" level=info msg="TearDown network for sandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" successfully"
Jan 29 14:38:06.008274 containerd[1501]: time="2025-01-29T14:38:06.007656400Z" level=info msg="StopPodSandbox for \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" returns successfully"
Jan 29 14:38:06.019176 systemd[1]: run-netns-cni\x2d7636d157\x2dc6bf\x2d5e23\x2de18d\x2dd708ee166977.mount: Deactivated successfully.
Jan 29 14:38:06.073211 kubelet[2727]: I0129 14:38:06.073111 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b98d747cb-hclzq" podStartSLOduration=30.66358485 podStartE2EDuration="42.07308528s" podCreationTimestamp="2025-01-29 14:37:24 +0000 UTC" firstStartedPulling="2025-01-29 14:37:53.331497424 +0000 UTC m=+50.524188685" lastFinishedPulling="2025-01-29 14:38:04.740997848 +0000 UTC m=+61.933689115" observedRunningTime="2025-01-29 14:38:05.93125691 +0000 UTC m=+63.123948196" watchObservedRunningTime="2025-01-29 14:38:06.07308528 +0000 UTC m=+63.265776548"
Jan 29 14:38:06.097268 kubelet[2727]: I0129 14:38:06.095355 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkswg\" (UniqueName: \"kubernetes.io/projected/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-kube-api-access-xkswg\") pod \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\" (UID: \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\") "
Jan 29 14:38:06.097268 kubelet[2727]: I0129 14:38:06.095430 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-tigera-ca-bundle\") pod \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\" (UID: \"5ff99ef3-cf8a-4707-99e1-ba45cf048b9d\") "
Jan 29 14:38:06.109848 systemd[1]: var-lib-kubelet-pods-5ff99ef3\x2dcf8a\x2d4707\x2d99e1\x2dba45cf048b9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxkswg.mount: Deactivated successfully.
Jan 29 14:38:06.111124 kubelet[2727]: I0129 14:38:06.111068 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-kube-api-access-xkswg" (OuterVolumeSpecName: "kube-api-access-xkswg") pod "5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" (UID: "5ff99ef3-cf8a-4707-99e1-ba45cf048b9d"). InnerVolumeSpecName "kube-api-access-xkswg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 14:38:06.129295 systemd[1]: var-lib-kubelet-pods-5ff99ef3\x2dcf8a\x2d4707\x2d99e1\x2dba45cf048b9d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Jan 29 14:38:06.134907 kubelet[2727]: I0129 14:38:06.134798 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" (UID: "5ff99ef3-cf8a-4707-99e1-ba45cf048b9d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:05.955 [WARNING][5690] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"114e2b6a-4dec-46af-b960-cb1b36f44242", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6", Pod:"coredns-7db6d8ff4d-snc7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dd26487ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:05.956 [INFO][5690] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:05.956 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" iface="eth0" netns=""
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:05.956 [INFO][5690] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:05.956 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.118 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0"
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.118 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.119 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.144 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0"
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.145 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0"
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.150 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:38:06.159797 containerd[1501]: 2025-01-29 14:38:06.155 [INFO][5690] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.159797 containerd[1501]: time="2025-01-29T14:38:06.159737842Z" level=info msg="TearDown network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" successfully"
Jan 29 14:38:06.159797 containerd[1501]: time="2025-01-29T14:38:06.159785533Z" level=info msg="StopPodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" returns successfully"
Jan 29 14:38:06.165775 containerd[1501]: time="2025-01-29T14:38:06.161644600Z" level=info msg="RemovePodSandbox for \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\""
Jan 29 14:38:06.165775 containerd[1501]: time="2025-01-29T14:38:06.161690558Z" level=info msg="Forcibly stopping sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\""
Jan 29 14:38:06.196468 kubelet[2727]: I0129 14:38:06.196401 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xkswg\" (UniqueName: \"kubernetes.io/projected/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-kube-api-access-xkswg\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\""
Jan 29 14:38:06.196468 kubelet[2727]: I0129 14:38:06.196459 2727 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d-tigera-ca-bundle\") on node \"srv-rni4s.gb1.brightbox.com\" DevicePath \"\""
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.258 [WARNING][5722] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"114e2b6a-4dec-46af-b960-cb1b36f44242", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"eec4c2495999cf9d04ed5aad22e1055e0f6036a16a10fbaf8af1862741da4ad6", Pod:"coredns-7db6d8ff4d-snc7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dd26487ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.259 [INFO][5722] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.259 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" iface="eth0" netns=""
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.259 [INFO][5722] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.259 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428"
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.329 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0"
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.330 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.330 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.348 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.349 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" HandleID="k8s-pod-network.666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--snc7g-eth0" Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.352 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:06.361658 containerd[1501]: 2025-01-29 14:38:06.359 [INFO][5722] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428" Jan 29 14:38:06.364686 containerd[1501]: time="2025-01-29T14:38:06.361722964Z" level=info msg="TearDown network for sandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" successfully" Jan 29 14:38:06.365840 containerd[1501]: time="2025-01-29T14:38:06.365751792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:38:06.365902 containerd[1501]: time="2025-01-29T14:38:06.365880529Z" level=info msg="RemovePodSandbox \"666a762bd72593f255d0be2228fa06a05ff64025860f90c09eab9d36b025d428\" returns successfully" Jan 29 14:38:06.366795 containerd[1501]: time="2025-01-29T14:38:06.366762460Z" level=info msg="StopPodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\"" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.536 [WARNING][5746] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d07ade6-7ccb-46c5-be71-11453b4fb53f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df", Pod:"coredns-7db6d8ff4d-7nbqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedda56ef048", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.538 [INFO][5746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.538 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" iface="eth0" netns="" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.538 [INFO][5746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.539 [INFO][5746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.643 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.644 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.645 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.659 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.659 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.662 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:06.673923 containerd[1501]: 2025-01-29 14:38:06.668 [INFO][5746] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:06.678692 containerd[1501]: time="2025-01-29T14:38:06.676098404Z" level=info msg="TearDown network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" successfully" Jan 29 14:38:06.678692 containerd[1501]: time="2025-01-29T14:38:06.676180107Z" level=info msg="StopPodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" returns successfully" Jan 29 14:38:06.680300 containerd[1501]: time="2025-01-29T14:38:06.679069378Z" level=info msg="RemovePodSandbox for \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\"" Jan 29 14:38:06.680300 containerd[1501]: time="2025-01-29T14:38:06.679126317Z" level=info msg="Forcibly stopping sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\"" Jan 29 14:38:06.882832 kubelet[2727]: I0129 14:38:06.880392 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:38:06.904170 systemd[1]: Removed slice kubepods-besteffort-pod5ff99ef3_cf8a_4707_99e1_ba45cf048b9d.slice - libcontainer container kubepods-besteffort-pod5ff99ef3_cf8a_4707_99e1_ba45cf048b9d.slice. Jan 29 14:38:07.052910 kubelet[2727]: I0129 14:38:07.052166 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" path="/var/lib/kubelet/pods/5ff99ef3-cf8a-4707-99e1-ba45cf048b9d/volumes" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:06.919 [WARNING][5776] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d07ade6-7ccb-46c5-be71-11453b4fb53f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"816cc8f01d1d6799476db775c2d90451fd4fb08e375838b98e28afc418fb11df", Pod:"coredns-7db6d8ff4d-7nbqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedda56ef048", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:06.922 [INFO][5776] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:06.922 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" iface="eth0" netns="" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:06.922 [INFO][5776] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:06.922 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.007 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.008 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.009 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.038 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.039 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" HandleID="k8s-pod-network.bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Workload="srv--rni4s.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--7nbqm-eth0" Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.047 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:07.062835 containerd[1501]: 2025-01-29 14:38:07.053 [INFO][5776] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5" Jan 29 14:38:07.062835 containerd[1501]: time="2025-01-29T14:38:07.059590091Z" level=info msg="TearDown network for sandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" successfully" Jan 29 14:38:07.079510 containerd[1501]: time="2025-01-29T14:38:07.078605890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:38:07.079835 containerd[1501]: time="2025-01-29T14:38:07.079789033Z" level=info msg="RemovePodSandbox \"bb574768e7c1c1edb1bd3dccae1f13543274eb42e301bbcfb0c7aa7682abc0b5\" returns successfully" Jan 29 14:38:07.082429 containerd[1501]: time="2025-01-29T14:38:07.082395377Z" level=info msg="StopPodSandbox for \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\"" Jan 29 14:38:07.082698 containerd[1501]: time="2025-01-29T14:38:07.082668523Z" level=info msg="TearDown network for sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" successfully" Jan 29 14:38:07.082841 containerd[1501]: time="2025-01-29T14:38:07.082785724Z" level=info msg="StopPodSandbox for \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" returns successfully" Jan 29 14:38:07.084323 containerd[1501]: time="2025-01-29T14:38:07.084106596Z" level=info msg="RemovePodSandbox for \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\"" Jan 29 14:38:07.085825 containerd[1501]: time="2025-01-29T14:38:07.084909116Z" level=info msg="Forcibly stopping sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\"" Jan 29 14:38:07.086033 containerd[1501]: time="2025-01-29T14:38:07.086003039Z" level=info msg="TearDown network for sandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" successfully" Jan 29 14:38:07.095927 containerd[1501]: time="2025-01-29T14:38:07.095611581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:38:07.095927 containerd[1501]: time="2025-01-29T14:38:07.095716216Z" level=info msg="RemovePodSandbox \"47c11c7023de2e575694e6b90ddc00a2b2319b50c630f4bfa72a8ae25488c20f\" returns successfully" Jan 29 14:38:07.098105 containerd[1501]: time="2025-01-29T14:38:07.098071879Z" level=info msg="StopPodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\"" Jan 29 14:38:07.187047 containerd[1501]: time="2025-01-29T14:38:07.186965389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:07.192767 containerd[1501]: time="2025-01-29T14:38:07.192524901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 14:38:07.193172 containerd[1501]: time="2025-01-29T14:38:07.193118362Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:07.201531 containerd[1501]: time="2025-01-29T14:38:07.201413502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:38:07.205147 containerd[1501]: time="2025-01-29T14:38:07.205001454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.463611947s" Jan 29 14:38:07.205147 containerd[1501]: time="2025-01-29T14:38:07.205089067Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 14:38:07.217112 containerd[1501]: time="2025-01-29T14:38:07.216554182Z" level=info msg="CreateContainer within sandbox \"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 14:38:07.274891 containerd[1501]: time="2025-01-29T14:38:07.271523611Z" level=info msg="CreateContainer within sandbox \"4f66d87254513761f431f10b81a0960195f3dc9cf4b582d45568f8498f159c1c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ca9365a5b46b206f4628add05f4c5122077eb25a29b82df10593ef41fe4762a\"" Jan 29 14:38:07.275757 containerd[1501]: time="2025-01-29T14:38:07.275186180Z" level=info msg="StartContainer for \"5ca9365a5b46b206f4628add05f4c5122077eb25a29b82df10593ef41fe4762a\"" Jan 29 14:38:07.380085 systemd[1]: Started cri-containerd-5ca9365a5b46b206f4628add05f4c5122077eb25a29b82df10593ef41fe4762a.scope - libcontainer container 5ca9365a5b46b206f4628add05f4c5122077eb25a29b82df10593ef41fe4762a. Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.324 [WARNING][5801] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"41990643-0c0f-44e1-bc9b-12b09a813a6e", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36", Pod:"calico-apiserver-b98d747cb-8t2mn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b988c0073c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.326 [INFO][5801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.326 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" iface="eth0" netns="" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.327 [INFO][5801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.327 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.447 [INFO][5822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.447 [INFO][5822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.447 [INFO][5822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.460 [WARNING][5822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.460 [INFO][5822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.465 [INFO][5822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:07.472827 containerd[1501]: 2025-01-29 14:38:07.467 [INFO][5801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.477947 containerd[1501]: time="2025-01-29T14:38:07.474431894Z" level=info msg="TearDown network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" successfully" Jan 29 14:38:07.477947 containerd[1501]: time="2025-01-29T14:38:07.474769127Z" level=info msg="StopPodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" returns successfully" Jan 29 14:38:07.478393 containerd[1501]: time="2025-01-29T14:38:07.478086321Z" level=info msg="RemovePodSandbox for \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\"" Jan 29 14:38:07.478393 containerd[1501]: time="2025-01-29T14:38:07.478166026Z" level=info msg="Forcibly stopping sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\"" Jan 29 14:38:07.524987 containerd[1501]: time="2025-01-29T14:38:07.524869106Z" level=info msg="StartContainer for \"5ca9365a5b46b206f4628add05f4c5122077eb25a29b82df10593ef41fe4762a\" returns successfully" Jan 
29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.594 [WARNING][5858] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0", GenerateName:"calico-apiserver-b98d747cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"41990643-0c0f-44e1-bc9b-12b09a813a6e", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b98d747cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"8c9c63497df2d1e1f0a1c69022411b5b8fd381189a43a3c7201cd51f80a69c36", Pod:"calico-apiserver-b98d747cb-8t2mn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b988c0073c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.594 [INFO][5858] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.594 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" iface="eth0" netns="" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.594 [INFO][5858] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.594 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.664 [INFO][5868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.664 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.664 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.687 [WARNING][5868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.687 [INFO][5868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" HandleID="k8s-pod-network.16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--apiserver--b98d747cb--8t2mn-eth0" Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.696 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:38:07.699734 containerd[1501]: 2025-01-29 14:38:07.697 [INFO][5858] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225" Jan 29 14:38:07.704150 containerd[1501]: time="2025-01-29T14:38:07.699961734Z" level=info msg="TearDown network for sandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" successfully" Jan 29 14:38:07.718960 containerd[1501]: time="2025-01-29T14:38:07.718900773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:38:07.719491 containerd[1501]: time="2025-01-29T14:38:07.719231152Z" level=info msg="RemovePodSandbox \"16e8b71db2f1c00eabe99b85f2a3d12ea9e720786d00f8bc9d652c4f3def0225\" returns successfully" Jan 29 14:38:07.940927 kubelet[2727]: I0129 14:38:07.940832 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zh8xr" podStartSLOduration=29.366201596 podStartE2EDuration="43.940770418s" podCreationTimestamp="2025-01-29 14:37:24 +0000 UTC" firstStartedPulling="2025-01-29 14:37:52.634051452 +0000 UTC m=+49.826742713" lastFinishedPulling="2025-01-29 14:38:07.208620266 +0000 UTC m=+64.401311535" observedRunningTime="2025-01-29 14:38:07.93991123 +0000 UTC m=+65.132602508" watchObservedRunningTime="2025-01-29 14:38:07.940770418 +0000 UTC m=+65.133461688" Jan 29 14:38:08.520493 kubelet[2727]: I0129 14:38:08.520328 2727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 14:38:08.524603 kubelet[2727]: I0129 14:38:08.524220 2727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 14:38:10.305777 systemd[1]: cri-containerd-0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991.scope: Deactivated successfully. Jan 29 14:38:10.308408 systemd[1]: cri-containerd-0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991.scope: Consumed 1.463s CPU time. Jan 29 14:38:10.420410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991-rootfs.mount: Deactivated successfully. 
Jan 29 14:38:10.450823 containerd[1501]: time="2025-01-29T14:38:10.450530001Z" level=info msg="shim disconnected" id=0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991 namespace=k8s.io Jan 29 14:38:10.450823 containerd[1501]: time="2025-01-29T14:38:10.450792785Z" level=warning msg="cleaning up after shim disconnected" id=0f53af9e4b5cf68071e751f430783b609e40209c251192be2238c928472a8991 namespace=k8s.io Jan 29 14:38:10.451598 containerd[1501]: time="2025-01-29T14:38:10.450845239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 14:38:10.936820 containerd[1501]: time="2025-01-29T14:38:10.936747362Z" level=info msg="CreateContainer within sandbox \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 14:38:10.964630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806081265.mount: Deactivated successfully. Jan 29 14:38:10.966204 containerd[1501]: time="2025-01-29T14:38:10.966148906Z" level=info msg="CreateContainer within sandbox \"a69cbee278048b8101fdc41bd4e8601c71153238390eb1e1ad4e3c2f0fde7315\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4048e12cafa7072490e2a472de4dc7c5760eff44be511265f08612d9bf6e4f6a\"" Jan 29 14:38:10.969400 containerd[1501]: time="2025-01-29T14:38:10.967896577Z" level=info msg="StartContainer for \"4048e12cafa7072490e2a472de4dc7c5760eff44be511265f08612d9bf6e4f6a\"" Jan 29 14:38:11.010183 systemd[1]: Started cri-containerd-4048e12cafa7072490e2a472de4dc7c5760eff44be511265f08612d9bf6e4f6a.scope - libcontainer container 4048e12cafa7072490e2a472de4dc7c5760eff44be511265f08612d9bf6e4f6a. 
Jan 29 14:38:11.085358 containerd[1501]: time="2025-01-29T14:38:11.085277627Z" level=info msg="StartContainer for \"4048e12cafa7072490e2a472de4dc7c5760eff44be511265f08612d9bf6e4f6a\" returns successfully" Jan 29 14:38:11.317408 kubelet[2727]: I0129 14:38:11.317192 2727 topology_manager.go:215] "Topology Admit Handler" podUID="e81f792a-e490-41ea-9677-7287008a9cbc" podNamespace="calico-system" podName="calico-kube-controllers-67966b479b-zljkv" Jan 29 14:38:11.320100 kubelet[2727]: E0129 14:38:11.317407 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" containerName="calico-kube-controllers" Jan 29 14:38:11.320214 kubelet[2727]: I0129 14:38:11.320183 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff99ef3-cf8a-4707-99e1-ba45cf048b9d" containerName="calico-kube-controllers" Jan 29 14:38:11.335920 systemd[1]: Created slice kubepods-besteffort-pode81f792a_e490_41ea_9677_7287008a9cbc.slice - libcontainer container kubepods-besteffort-pode81f792a_e490_41ea_9677_7287008a9cbc.slice. 
Jan 29 14:38:11.441047 kubelet[2727]: I0129 14:38:11.440942 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2z8\" (UniqueName: \"kubernetes.io/projected/e81f792a-e490-41ea-9677-7287008a9cbc-kube-api-access-pq2z8\") pod \"calico-kube-controllers-67966b479b-zljkv\" (UID: \"e81f792a-e490-41ea-9677-7287008a9cbc\") " pod="calico-system/calico-kube-controllers-67966b479b-zljkv" Jan 29 14:38:11.441047 kubelet[2727]: I0129 14:38:11.441019 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e81f792a-e490-41ea-9677-7287008a9cbc-tigera-ca-bundle\") pod \"calico-kube-controllers-67966b479b-zljkv\" (UID: \"e81f792a-e490-41ea-9677-7287008a9cbc\") " pod="calico-system/calico-kube-controllers-67966b479b-zljkv" Jan 29 14:38:11.644953 containerd[1501]: time="2025-01-29T14:38:11.644600262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67966b479b-zljkv,Uid:e81f792a-e490-41ea-9677-7287008a9cbc,Namespace:calico-system,Attempt:0,}" Jan 29 14:38:11.908011 systemd-networkd[1417]: cali275ce8e4165: Link UP Jan 29 14:38:11.909495 systemd-networkd[1417]: cali275ce8e4165: Gained carrier Jan 29 14:38:11.958296 kubelet[2727]: I0129 14:38:11.954746 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6p8x4" podStartSLOduration=10.954583146000001 podStartE2EDuration="10.954583146s" podCreationTimestamp="2025-01-29 14:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:38:11.948080882 +0000 UTC m=+69.140772153" watchObservedRunningTime="2025-01-29 14:38:11.954583146 +0000 UTC m=+69.147274420" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.775 [INFO][5964] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0 calico-kube-controllers-67966b479b- calico-system e81f792a-e490-41ea-9677-7287008a9cbc 1029 0 2025-01-29 14:38:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67966b479b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-rni4s.gb1.brightbox.com calico-kube-controllers-67966b479b-zljkv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali275ce8e4165 [] []}} ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.776 [INFO][5964] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.823 [INFO][5975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" HandleID="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.837 [INFO][5975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" 
HandleID="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003355e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rni4s.gb1.brightbox.com", "pod":"calico-kube-controllers-67966b479b-zljkv", "timestamp":"2025-01-29 14:38:11.823272907 +0000 UTC"}, Hostname:"srv-rni4s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.837 [INFO][5975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.837 [INFO][5975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.837 [INFO][5975] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rni4s.gb1.brightbox.com' Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.839 [INFO][5975] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.847 [INFO][5975] ipam/ipam.go 372: Looking up existing affinities for host host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.854 [INFO][5975] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.857 [INFO][5975] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.861 
[INFO][5975] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.861 [INFO][5975] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.864 [INFO][5975] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137 Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.881 [INFO][5975] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.899 [INFO][5975] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.199/26] block=192.168.0.192/26 handle="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.899 [INFO][5975] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.199/26] handle="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" host="srv-rni4s.gb1.brightbox.com" Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.899 [INFO][5975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 14:38:11.962090 containerd[1501]: 2025-01-29 14:38:11.899 [INFO][5975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.199/26] IPv6=[] ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" HandleID="k8s-pod-network.a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 14:38:11.964974 containerd[1501]: 2025-01-29 14:38:11.902 [INFO][5964] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0", GenerateName:"calico-kube-controllers-67966b479b-", Namespace:"calico-system", SelfLink:"", UID:"e81f792a-e490-41ea-9677-7287008a9cbc", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67966b479b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-67966b479b-zljkv", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali275ce8e4165", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:11.964974 containerd[1501]: 2025-01-29 14:38:11.902 [INFO][5964] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.199/32] ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 14:38:11.964974 containerd[1501]: 2025-01-29 14:38:11.902 [INFO][5964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali275ce8e4165 ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 14:38:11.964974 containerd[1501]: 2025-01-29 14:38:11.910 [INFO][5964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 14:38:11.964974 containerd[1501]: 2025-01-29 14:38:11.912 [INFO][5964] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0", GenerateName:"calico-kube-controllers-67966b479b-", Namespace:"calico-system", SelfLink:"", UID:"e81f792a-e490-41ea-9677-7287008a9cbc", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67966b479b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rni4s.gb1.brightbox.com", ContainerID:"a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137", Pod:"calico-kube-controllers-67966b479b-zljkv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali275ce8e4165", MAC:"8e:e2:fb:16:e0:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:38:11.964974 containerd[1501]: 2025-01-29 14:38:11.949 [INFO][5964] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137" Namespace="calico-system" Pod="calico-kube-controllers-67966b479b-zljkv" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--67966b479b--zljkv-eth0" Jan 29 
14:38:12.009698 containerd[1501]: time="2025-01-29T14:38:12.009279219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:38:12.009698 containerd[1501]: time="2025-01-29T14:38:12.009413046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:38:12.009698 containerd[1501]: time="2025-01-29T14:38:12.009438471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:38:12.010564 containerd[1501]: time="2025-01-29T14:38:12.010319532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:38:12.047036 systemd[1]: Started cri-containerd-a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137.scope - libcontainer container a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137. 
Jan 29 14:38:12.112473 containerd[1501]: time="2025-01-29T14:38:12.112423689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67966b479b-zljkv,Uid:e81f792a-e490-41ea-9677-7287008a9cbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137\"" Jan 29 14:38:12.128312 containerd[1501]: time="2025-01-29T14:38:12.128222616Z" level=info msg="CreateContainer within sandbox \"a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 14:38:12.141982 containerd[1501]: time="2025-01-29T14:38:12.141916516Z" level=info msg="CreateContainer within sandbox \"a0217b5da9895c501704e99944dd70e48fa456a677cc85a04c2f5ab2de98b137\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6c1400e03372e46655fece5034d712d6fb6d8fab71a6e172403944b75411c461\"" Jan 29 14:38:12.142846 containerd[1501]: time="2025-01-29T14:38:12.142704519Z" level=info msg="StartContainer for \"6c1400e03372e46655fece5034d712d6fb6d8fab71a6e172403944b75411c461\"" Jan 29 14:38:12.189167 systemd[1]: Started cri-containerd-6c1400e03372e46655fece5034d712d6fb6d8fab71a6e172403944b75411c461.scope - libcontainer container 6c1400e03372e46655fece5034d712d6fb6d8fab71a6e172403944b75411c461. 
Jan 29 14:38:12.255888 containerd[1501]: time="2025-01-29T14:38:12.255693019Z" level=info msg="StartContainer for \"6c1400e03372e46655fece5034d712d6fb6d8fab71a6e172403944b75411c461\" returns successfully" Jan 29 14:38:12.975898 kubelet[2727]: I0129 14:38:12.975752 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67966b479b-zljkv" podStartSLOduration=6.97572172 podStartE2EDuration="6.97572172s" podCreationTimestamp="2025-01-29 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 14:38:12.969750046 +0000 UTC m=+70.162441323" watchObservedRunningTime="2025-01-29 14:38:12.97572172 +0000 UTC m=+70.168412984" Jan 29 14:38:13.585631 systemd-networkd[1417]: cali275ce8e4165: Gained IPv6LL Jan 29 14:38:13.682109 kubelet[2727]: I0129 14:38:13.681655 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:38:14.025236 systemd[1]: run-containerd-runc-k8s.io-6c1400e03372e46655fece5034d712d6fb6d8fab71a6e172403944b75411c461-runc.0jlNIu.mount: Deactivated successfully. Jan 29 14:38:30.505347 systemd[1]: Started sshd@9-10.244.17.238:22-139.178.68.195:60042.service - OpenSSH per-connection server daemon (139.178.68.195:60042). Jan 29 14:38:31.458644 sshd[6554]: Accepted publickey for core from 139.178.68.195 port 60042 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:38:31.463238 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:38:31.474615 systemd-logind[1483]: New session 12 of user core. Jan 29 14:38:31.480096 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 14:38:32.543739 systemd[1]: run-containerd-runc-k8s.io-4048e12cafa7072490e2a472de4dc7c5760eff44be511265f08612d9bf6e4f6a-runc.GGSsLh.mount: Deactivated successfully. 
Jan 29 14:38:32.825933 sshd[6554]: pam_unix(sshd:session): session closed for user core Jan 29 14:38:32.834452 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Jan 29 14:38:32.836440 systemd[1]: sshd@9-10.244.17.238:22-139.178.68.195:60042.service: Deactivated successfully. Jan 29 14:38:32.841389 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 14:38:32.845517 systemd-logind[1483]: Removed session 12. Jan 29 14:38:37.917620 kubelet[2727]: I0129 14:38:37.916970 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:38:37.993476 systemd[1]: Started sshd@10-10.244.17.238:22-139.178.68.195:56032.service - OpenSSH per-connection server daemon (139.178.68.195:56032). Jan 29 14:38:38.954323 sshd[6830]: Accepted publickey for core from 139.178.68.195 port 56032 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:38:38.959204 sshd[6830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:38:38.968356 systemd-logind[1483]: New session 13 of user core. Jan 29 14:38:38.976125 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 14:38:40.020792 sshd[6830]: pam_unix(sshd:session): session closed for user core Jan 29 14:38:40.026870 systemd[1]: sshd@10-10.244.17.238:22-139.178.68.195:56032.service: Deactivated successfully. Jan 29 14:38:40.026916 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Jan 29 14:38:40.029988 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 14:38:40.033182 systemd-logind[1483]: Removed session 13. Jan 29 14:38:45.193353 systemd[1]: Started sshd@11-10.244.17.238:22-139.178.68.195:53154.service - OpenSSH per-connection server daemon (139.178.68.195:53154). 
Jan 29 14:38:46.114033 sshd[6874]: Accepted publickey for core from 139.178.68.195 port 53154 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:38:46.118247 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:38:46.130458 systemd-logind[1483]: New session 14 of user core. Jan 29 14:38:46.137149 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 14:38:46.932256 sshd[6874]: pam_unix(sshd:session): session closed for user core Jan 29 14:38:46.939424 systemd[1]: sshd@11-10.244.17.238:22-139.178.68.195:53154.service: Deactivated successfully. Jan 29 14:38:46.943306 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 14:38:46.946675 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Jan 29 14:38:46.948573 systemd-logind[1483]: Removed session 14. Jan 29 14:38:47.093337 systemd[1]: Started sshd@12-10.244.17.238:22-139.178.68.195:53168.service - OpenSSH per-connection server daemon (139.178.68.195:53168). Jan 29 14:38:47.982229 sshd[6896]: Accepted publickey for core from 139.178.68.195 port 53168 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:38:47.985621 sshd[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:38:47.996330 systemd-logind[1483]: New session 15 of user core. Jan 29 14:38:48.003238 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 14:38:48.800842 sshd[6896]: pam_unix(sshd:session): session closed for user core Jan 29 14:38:48.807557 systemd[1]: sshd@12-10.244.17.238:22-139.178.68.195:53168.service: Deactivated successfully. Jan 29 14:38:48.810019 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 14:38:48.811880 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Jan 29 14:38:48.814160 systemd-logind[1483]: Removed session 15. 
Jan 29 14:38:48.959291 systemd[1]: Started sshd@13-10.244.17.238:22-139.178.68.195:53182.service - OpenSSH per-connection server daemon (139.178.68.195:53182). Jan 29 14:38:49.876111 sshd[6910]: Accepted publickey for core from 139.178.68.195 port 53182 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:38:49.878006 sshd[6910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:38:49.886458 systemd-logind[1483]: New session 16 of user core. Jan 29 14:38:49.894227 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 14:38:50.607294 sshd[6910]: pam_unix(sshd:session): session closed for user core Jan 29 14:38:50.612377 systemd[1]: sshd@13-10.244.17.238:22-139.178.68.195:53182.service: Deactivated successfully. Jan 29 14:38:50.615057 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 14:38:50.616418 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Jan 29 14:38:50.618699 systemd-logind[1483]: Removed session 16. Jan 29 14:38:55.772243 systemd[1]: Started sshd@14-10.244.17.238:22-139.178.68.195:55016.service - OpenSSH per-connection server daemon (139.178.68.195:55016). Jan 29 14:38:56.401307 systemd[1]: Started sshd@15-10.244.17.238:22-184.105.247.252:35224.service - OpenSSH per-connection server daemon (184.105.247.252:35224). Jan 29 14:38:56.423895 sshd[6931]: banner exchange: Connection from 184.105.247.252 port 35224: invalid format Jan 29 14:38:56.425329 systemd[1]: sshd@15-10.244.17.238:22-184.105.247.252:35224.service: Deactivated successfully. Jan 29 14:38:56.675002 sshd[6928]: Accepted publickey for core from 139.178.68.195 port 55016 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:38:56.677232 sshd[6928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:38:56.688969 systemd-logind[1483]: New session 17 of user core. 
Jan 29 14:38:56.695025 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 14:38:57.427230 sshd[6928]: pam_unix(sshd:session): session closed for user core Jan 29 14:38:57.432603 systemd[1]: sshd@14-10.244.17.238:22-139.178.68.195:55016.service: Deactivated successfully. Jan 29 14:38:57.436589 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 14:38:57.437921 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Jan 29 14:38:57.439942 systemd-logind[1483]: Removed session 17. Jan 29 14:39:02.585466 systemd[1]: Started sshd@16-10.244.17.238:22-139.178.68.195:55018.service - OpenSSH per-connection server daemon (139.178.68.195:55018). Jan 29 14:39:03.502341 sshd[6975]: Accepted publickey for core from 139.178.68.195 port 55018 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:39:03.505505 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:39:03.513928 systemd-logind[1483]: New session 18 of user core. Jan 29 14:39:03.520411 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 14:39:04.305562 sshd[6975]: pam_unix(sshd:session): session closed for user core Jan 29 14:39:04.319442 systemd[1]: sshd@16-10.244.17.238:22-139.178.68.195:55018.service: Deactivated successfully. Jan 29 14:39:04.325643 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 14:39:04.328035 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Jan 29 14:39:04.332697 systemd-logind[1483]: Removed session 18. 
Jan 29 14:39:07.745183 kubelet[2727]: I0129 14:39:07.744925 2727 scope.go:117] "RemoveContainer" containerID="ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a" Jan 29 14:39:07.789784 containerd[1501]: time="2025-01-29T14:39:07.783138236Z" level=info msg="RemoveContainer for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\"" Jan 29 14:39:07.826554 containerd[1501]: time="2025-01-29T14:39:07.826435695Z" level=info msg="RemoveContainer for \"ebc1fbd9f3c36879854d7b5af1fc72c5a90d214b0d9d73465cb98b0b5ce2394a\" returns successfully" Jan 29 14:39:07.834315 containerd[1501]: time="2025-01-29T14:39:07.833998845Z" level=info msg="StopPodSandbox for \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\"" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.027 [WARNING][7001] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.029 [INFO][7001] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.029 [INFO][7001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" iface="eth0" netns="" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.030 [INFO][7001] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.030 [INFO][7001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.194 [INFO][7007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0" Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.199 [INFO][7007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.204 [INFO][7007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.223 [WARNING][7007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.224 [INFO][7007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.226 [INFO][7007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:39:08.231850 containerd[1501]: 2025-01-29 14:39:08.229 [INFO][7001] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:39:08.267789 containerd[1501]: time="2025-01-29T14:39:08.267701358Z" level=info msg="TearDown network for sandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" successfully"
Jan 29 14:39:08.267789 containerd[1501]: time="2025-01-29T14:39:08.267817181Z" level=info msg="StopPodSandbox for \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" returns successfully"
Jan 29 14:39:08.275669 containerd[1501]: time="2025-01-29T14:39:08.275273960Z" level=info msg="RemovePodSandbox for \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\""
Jan 29 14:39:08.304536 containerd[1501]: time="2025-01-29T14:39:08.303689697Z" level=info msg="Forcibly stopping sandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\""
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.420 [WARNING][7025] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" WorkloadEndpoint="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.420 [INFO][7025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.420 [INFO][7025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" iface="eth0" netns=""
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.421 [INFO][7025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.421 [INFO][7025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.475 [INFO][7031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.476 [INFO][7031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.476 [INFO][7031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.487 [WARNING][7031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.487 [INFO][7031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" HandleID="k8s-pod-network.0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81" Workload="srv--rni4s.gb1.brightbox.com-k8s-calico--kube--controllers--77d6948c77--z6748-eth0"
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.491 [INFO][7031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:39:08.502016 containerd[1501]: 2025-01-29 14:39:08.497 [INFO][7025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81"
Jan 29 14:39:08.502729 containerd[1501]: time="2025-01-29T14:39:08.502002966Z" level=info msg="TearDown network for sandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" successfully"
Jan 29 14:39:08.516337 containerd[1501]: time="2025-01-29T14:39:08.516203292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 14:39:08.516548 containerd[1501]: time="2025-01-29T14:39:08.516366138Z" level=info msg="RemovePodSandbox \"0229fe1df6c75118ef1343f49b41379350b95deb5bc4bf1c62d79fab888e2f81\" returns successfully"
Jan 29 14:39:09.467210 systemd[1]: Started sshd@17-10.244.17.238:22-139.178.68.195:43514.service - OpenSSH per-connection server daemon (139.178.68.195:43514).
Jan 29 14:39:10.435256 sshd[7038]: Accepted publickey for core from 139.178.68.195 port 43514 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:10.439253 sshd[7038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:10.452880 systemd-logind[1483]: New session 19 of user core.
Jan 29 14:39:10.469161 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 14:39:11.256563 sshd[7038]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:11.262176 systemd[1]: sshd@17-10.244.17.238:22-139.178.68.195:43514.service: Deactivated successfully.
Jan 29 14:39:11.265252 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 14:39:11.266426 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
Jan 29 14:39:11.267989 systemd-logind[1483]: Removed session 19.
Jan 29 14:39:11.413280 systemd[1]: Started sshd@18-10.244.17.238:22-139.178.68.195:43520.service - OpenSSH per-connection server daemon (139.178.68.195:43520).
Jan 29 14:39:12.312958 sshd[7051]: Accepted publickey for core from 139.178.68.195 port 43520 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:12.315290 sshd[7051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:12.325209 systemd-logind[1483]: New session 20 of user core.
Jan 29 14:39:12.334262 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 14:39:13.420792 sshd[7051]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:13.431343 systemd[1]: sshd@18-10.244.17.238:22-139.178.68.195:43520.service: Deactivated successfully.
Jan 29 14:39:13.435462 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 14:39:13.436707 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit.
Jan 29 14:39:13.438457 systemd-logind[1483]: Removed session 20.
Jan 29 14:39:13.582772 systemd[1]: Started sshd@19-10.244.17.238:22-139.178.68.195:43524.service - OpenSSH per-connection server daemon (139.178.68.195:43524).
Jan 29 14:39:14.502291 sshd[7100]: Accepted publickey for core from 139.178.68.195 port 43524 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:14.504915 sshd[7100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:14.517339 systemd-logind[1483]: New session 21 of user core.
Jan 29 14:39:14.527430 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 14:39:18.077013 sshd[7100]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:18.098986 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit.
Jan 29 14:39:18.100146 systemd[1]: sshd@19-10.244.17.238:22-139.178.68.195:43524.service: Deactivated successfully.
Jan 29 14:39:18.104512 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 14:39:18.106009 systemd-logind[1483]: Removed session 21.
Jan 29 14:39:18.226984 systemd[1]: Started sshd@20-10.244.17.238:22-139.178.68.195:35616.service - OpenSSH per-connection server daemon (139.178.68.195:35616).
Jan 29 14:39:19.155897 sshd[7128]: Accepted publickey for core from 139.178.68.195 port 35616 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:19.158512 sshd[7128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:19.166874 systemd-logind[1483]: New session 22 of user core.
Jan 29 14:39:19.176108 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 14:39:20.502606 sshd[7128]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:20.509464 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit.
Jan 29 14:39:20.510665 systemd[1]: sshd@20-10.244.17.238:22-139.178.68.195:35616.service: Deactivated successfully.
Jan 29 14:39:20.514626 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 14:39:20.516571 systemd-logind[1483]: Removed session 22.
Jan 29 14:39:20.661192 systemd[1]: Started sshd@21-10.244.17.238:22-139.178.68.195:35624.service - OpenSSH per-connection server daemon (139.178.68.195:35624).
Jan 29 14:39:21.574310 sshd[7139]: Accepted publickey for core from 139.178.68.195 port 35624 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:21.577196 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:21.585073 systemd-logind[1483]: New session 23 of user core.
Jan 29 14:39:21.591029 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 14:39:22.341048 sshd[7139]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:22.347575 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit.
Jan 29 14:39:22.349295 systemd[1]: sshd@21-10.244.17.238:22-139.178.68.195:35624.service: Deactivated successfully.
Jan 29 14:39:22.352345 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 14:39:22.354160 systemd-logind[1483]: Removed session 23.
Jan 29 14:39:27.505266 systemd[1]: Started sshd@22-10.244.17.238:22-139.178.68.195:48444.service - OpenSSH per-connection server daemon (139.178.68.195:48444).
Jan 29 14:39:28.407186 sshd[7156]: Accepted publickey for core from 139.178.68.195 port 48444 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:28.409901 sshd[7156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:28.418183 systemd-logind[1483]: New session 24 of user core.
Jan 29 14:39:28.424119 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 14:39:29.169422 sshd[7156]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:29.176715 systemd[1]: sshd@22-10.244.17.238:22-139.178.68.195:48444.service: Deactivated successfully.
Jan 29 14:39:29.180852 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 14:39:29.184371 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit.
Jan 29 14:39:29.186758 systemd-logind[1483]: Removed session 24.
Jan 29 14:39:34.327250 systemd[1]: Started sshd@23-10.244.17.238:22-139.178.68.195:48460.service - OpenSSH per-connection server daemon (139.178.68.195:48460).
Jan 29 14:39:35.275278 sshd[7191]: Accepted publickey for core from 139.178.68.195 port 48460 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:35.281755 sshd[7191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:35.297213 systemd-logind[1483]: New session 25 of user core.
Jan 29 14:39:35.303100 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 14:39:36.086172 sshd[7191]: pam_unix(sshd:session): session closed for user core
Jan 29 14:39:36.093522 systemd[1]: sshd@23-10.244.17.238:22-139.178.68.195:48460.service: Deactivated successfully.
Jan 29 14:39:36.099304 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 14:39:36.102024 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit.
Jan 29 14:39:36.104354 systemd-logind[1483]: Removed session 25.
Jan 29 14:39:41.256863 systemd[1]: Started sshd@24-10.244.17.238:22-139.178.68.195:46750.service - OpenSSH per-connection server daemon (139.178.68.195:46750).
Jan 29 14:39:42.165692 sshd[7208]: Accepted publickey for core from 139.178.68.195 port 46750 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:39:42.168990 sshd[7208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:39:42.181412 systemd-logind[1483]: New session 26 of user core.
Jan 29 14:39:42.185562 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 14:39:42.906938 sshd[7208]: pam_unix(sshd:session): session closed for user core Jan 29 14:39:42.914637 systemd[1]: sshd@24-10.244.17.238:22-139.178.68.195:46750.service: Deactivated successfully. Jan 29 14:39:42.918772 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 14:39:42.920764 systemd-logind[1483]: Session 26 logged out. Waiting for processes to exit. Jan 29 14:39:42.922560 systemd-logind[1483]: Removed session 26. Jan 29 14:39:45.444639 systemd[1]: Started sshd@25-10.244.17.238:22-92.255.85.188:49558.service - OpenSSH per-connection server daemon (92.255.85.188:49558).