Jan 30 18:28:44.905717 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 18:28:44.905752 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 18:28:44.905764 kernel: BIOS-provided physical RAM map:
Jan 30 18:28:44.905776 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 18:28:44.905784 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 18:28:44.905802 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 18:28:44.905810 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 30 18:28:44.905818 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 30 18:28:44.905825 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 18:28:44.905832 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 18:28:44.905839 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 18:28:44.905847 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 18:28:44.905857 kernel: NX (Execute Disable) protection: active
Jan 30 18:28:44.905864 kernel: APIC: Static calls initialized
Jan 30 18:28:44.905873 kernel: SMBIOS 2.8 present.
Jan 30 18:28:44.905882 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 30 18:28:44.905890 kernel: Hypervisor detected: KVM
Jan 30 18:28:44.905902 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 18:28:44.905910 kernel: kvm-clock: using sched offset of 3782186122 cycles
Jan 30 18:28:44.905918 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 18:28:44.905927 kernel: tsc: Detected 2294.608 MHz processor
Jan 30 18:28:44.905935 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 18:28:44.905944 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 18:28:44.905952 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 30 18:28:44.905960 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 18:28:44.905968 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 18:28:44.905980 kernel: Using GB pages for direct mapping
Jan 30 18:28:44.905988 kernel: ACPI: Early table checksum verification disabled
Jan 30 18:28:44.905996 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 30 18:28:44.906004 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906012 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906020 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906028 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 30 18:28:44.906037 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906045 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906056 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906064 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:28:44.906072 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 30 18:28:44.906080 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 30 18:28:44.906089 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 30 18:28:44.906101 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 30 18:28:44.906110 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 30 18:28:44.906122 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 30 18:28:44.906130 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 30 18:28:44.906139 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 18:28:44.906148 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 18:28:44.906171 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 30 18:28:44.906181 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 30 18:28:44.906190 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 30 18:28:44.906200 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 30 18:28:44.906222 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 30 18:28:44.906230 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 30 18:28:44.906239 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 30 18:28:44.906247 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 30 18:28:44.906256 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 30 18:28:44.906264 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 30 18:28:44.906273 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 30 18:28:44.906281 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 30 18:28:44.906290 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 30 18:28:44.906302 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 30 18:28:44.906316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 18:28:44.906325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 18:28:44.906334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 30 18:28:44.906343 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 30 18:28:44.906352 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 30 18:28:44.906361 kernel: Zone ranges:
Jan 30 18:28:44.906369 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 18:28:44.906378 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 30 18:28:44.906390 kernel: Normal empty
Jan 30 18:28:44.906399 kernel: Movable zone start for each node
Jan 30 18:28:44.906407 kernel: Early memory node ranges
Jan 30 18:28:44.906416 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 18:28:44.906425 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 30 18:28:44.906433 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 30 18:28:44.906442 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 18:28:44.906450 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 18:28:44.906459 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 30 18:28:44.906468 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 18:28:44.906480 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 18:28:44.906488 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 18:28:44.906497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 18:28:44.906506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 18:28:44.906514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 18:28:44.906523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 18:28:44.906532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 18:28:44.906540 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 18:28:44.906549 kernel: TSC deadline timer available
Jan 30 18:28:44.906561 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 30 18:28:44.906569 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 18:28:44.906578 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 18:28:44.906587 kernel: Booting paravirtualized kernel on KVM
Jan 30 18:28:44.906596 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 18:28:44.906604 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 18:28:44.906613 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 18:28:44.906622 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 18:28:44.906630 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 18:28:44.906642 kernel: kvm-guest: PV spinlocks enabled
Jan 30 18:28:44.906651 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 18:28:44.906661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 18:28:44.906670 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 18:28:44.906678 kernel: random: crng init done
Jan 30 18:28:44.908719 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 18:28:44.908729 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 18:28:44.908738 kernel: Fallback order for Node 0: 0
Jan 30 18:28:44.908753 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 30 18:28:44.908762 kernel: Policy zone: DMA32
Jan 30 18:28:44.908771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 18:28:44.908780 kernel: software IO TLB: area num 16.
Jan 30 18:28:44.908789 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 194832K reserved, 0K cma-reserved)
Jan 30 18:28:44.908798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 18:28:44.908823 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 18:28:44.908832 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 18:28:44.908842 kernel: Dynamic Preempt: voluntary
Jan 30 18:28:44.908855 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 18:28:44.908866 kernel: rcu: RCU event tracing is enabled.
Jan 30 18:28:44.908876 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 18:28:44.908886 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 18:28:44.908896 kernel: Rude variant of Tasks RCU enabled.
Jan 30 18:28:44.908916 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 18:28:44.908930 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 18:28:44.908940 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 18:28:44.908950 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 30 18:28:44.908960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 18:28:44.908970 kernel: Console: colour VGA+ 80x25
Jan 30 18:28:44.908980 kernel: printk: console [tty0] enabled
Jan 30 18:28:44.908994 kernel: printk: console [ttyS0] enabled
Jan 30 18:28:44.909004 kernel: ACPI: Core revision 20230628
Jan 30 18:28:44.909015 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 18:28:44.909025 kernel: x2apic enabled
Jan 30 18:28:44.909035 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 18:28:44.909049 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 30 18:28:44.909060 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 30 18:28:44.909070 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 18:28:44.909080 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 18:28:44.909090 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 18:28:44.909100 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 18:28:44.909111 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 18:28:44.909120 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 18:28:44.909131 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 18:28:44.909141 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 18:28:44.909154 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 18:28:44.909165 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 18:28:44.909175 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 18:28:44.909185 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 18:28:44.909195 kernel: TAA: Mitigation: Clear CPU buffers
Jan 30 18:28:44.909205 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 18:28:44.909215 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 18:28:44.909225 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 18:28:44.909235 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 18:28:44.909245 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 18:28:44.909256 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 18:28:44.909269 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 18:28:44.909280 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 18:28:44.909290 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 30 18:28:44.909300 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 18:28:44.909316 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 30 18:28:44.909327 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 30 18:28:44.909337 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 30 18:28:44.909347 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 30 18:28:44.909357 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 30 18:28:44.909367 kernel: Freeing SMP alternatives memory: 32K
Jan 30 18:28:44.909377 kernel: pid_max: default: 32768 minimum: 301
Jan 30 18:28:44.909391 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 18:28:44.909401 kernel: landlock: Up and running.
Jan 30 18:28:44.909411 kernel: SELinux: Initializing.
Jan 30 18:28:44.909421 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 18:28:44.909432 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 18:28:44.909442 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Jan 30 18:28:44.909452 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 18:28:44.909462 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 18:28:44.909473 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 18:28:44.909483 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 18:28:44.909497 kernel: signal: max sigframe size: 3632
Jan 30 18:28:44.909507 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 18:28:44.909518 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 18:28:44.909528 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 18:28:44.909538 kernel: smp: Bringing up secondary CPUs ...
Jan 30 18:28:44.909548 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 18:28:44.909559 kernel: .... node #0, CPUs: #1
Jan 30 18:28:44.909569 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 30 18:28:44.909579 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 18:28:44.909593 kernel: smpboot: Max logical packages: 16
Jan 30 18:28:44.909603 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 30 18:28:44.909613 kernel: devtmpfs: initialized
Jan 30 18:28:44.909623 kernel: x86/mm: Memory block size: 128MB
Jan 30 18:28:44.909634 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 18:28:44.909644 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 18:28:44.909654 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 18:28:44.909664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 18:28:44.909675 kernel: audit: initializing netlink subsys (disabled)
Jan 30 18:28:44.909703 kernel: audit: type=2000 audit(1738261723.393:1): state=initialized audit_enabled=0 res=1
Jan 30 18:28:44.909713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 18:28:44.909724 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 18:28:44.909734 kernel: cpuidle: using governor menu
Jan 30 18:28:44.909744 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 18:28:44.909755 kernel: dca service started, version 1.12.1
Jan 30 18:28:44.909765 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 18:28:44.909775 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 18:28:44.909786 kernel: PCI: Using configuration type 1 for base access
Jan 30 18:28:44.909799 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 18:28:44.909810 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 18:28:44.909820 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 18:28:44.909830 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 18:28:44.909840 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 18:28:44.909851 kernel: ACPI: Added _OSI(Module Device)
Jan 30 18:28:44.909861 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 18:28:44.909871 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 18:28:44.909881 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 18:28:44.909895 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 18:28:44.909905 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 18:28:44.909916 kernel: ACPI: Interpreter enabled
Jan 30 18:28:44.909926 kernel: ACPI: PM: (supports S0 S5)
Jan 30 18:28:44.909936 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 18:28:44.909946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 18:28:44.909957 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 18:28:44.909967 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 18:28:44.909977 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 18:28:44.910155 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 18:28:44.910272 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 18:28:44.910383 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 18:28:44.910396 kernel: PCI host bridge to bus 0000:00
Jan 30 18:28:44.910506 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 18:28:44.910602 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 18:28:44.912776 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 18:28:44.912890 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 30 18:28:44.912985 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 18:28:44.913078 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 30 18:28:44.913171 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 18:28:44.913295 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 18:28:44.913414 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 30 18:28:44.913537 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 30 18:28:44.913641 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 30 18:28:44.914797 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 30 18:28:44.914911 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 18:28:44.915026 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.915135 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 30 18:28:44.915255 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.915381 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 30 18:28:44.915494 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.915619 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 30 18:28:44.916777 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.916877 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 30 18:28:44.916979 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.917084 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 30 18:28:44.917191 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.917285 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 30 18:28:44.917411 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.917520 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 30 18:28:44.917632 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 18:28:44.918782 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 30 18:28:44.918890 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 18:28:44.918985 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 18:28:44.919079 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 30 18:28:44.919171 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 18:28:44.919264 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 30 18:28:44.919390 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 18:28:44.919501 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 18:28:44.919604 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 30 18:28:44.920726 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 30 18:28:44.920848 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 18:28:44.920951 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 18:28:44.921059 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 18:28:44.921167 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 30 18:28:44.921270 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 30 18:28:44.921386 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 18:28:44.921490 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 18:28:44.921604 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 30 18:28:44.922753 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 30 18:28:44.922860 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 18:28:44.922954 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 18:28:44.923047 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 18:28:44.923149 kernel: pci_bus 0000:02: extended config space not accessible
Jan 30 18:28:44.923257 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 30 18:28:44.923363 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 30 18:28:44.923460 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 18:28:44.923560 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 18:28:44.923665 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 18:28:44.924794 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 30 18:28:44.924890 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 18:28:44.924983 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 18:28:44.925077 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 18:28:44.925182 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 18:28:44.925283 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 18:28:44.925384 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 18:28:44.925476 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 18:28:44.925570 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 18:28:44.925665 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 18:28:44.926789 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 18:28:44.926883 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 18:28:44.926977 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 18:28:44.927073 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 18:28:44.927165 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 18:28:44.927259 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 18:28:44.927360 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 18:28:44.927453 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 18:28:44.927546 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 18:28:44.927639 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 18:28:44.928762 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 18:28:44.928862 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 18:28:44.928954 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 18:28:44.929046 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 18:28:44.929059 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 18:28:44.929069 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 18:28:44.929079 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 18:28:44.929088 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 18:28:44.929097 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 18:28:44.929107 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 18:28:44.929120 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 18:28:44.929130 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 18:28:44.929139 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 18:28:44.929148 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 18:28:44.929158 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 18:28:44.929167 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 18:28:44.929176 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 18:28:44.929185 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 18:28:44.929195 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 18:28:44.929208 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 18:28:44.929217 kernel: iommu: Default domain type: Translated
Jan 30 18:28:44.929227 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 18:28:44.929236 kernel: PCI: Using ACPI for IRQ routing
Jan 30 18:28:44.929245 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 18:28:44.929254 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 18:28:44.929264 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 30 18:28:44.929362 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 18:28:44.929458 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 18:28:44.929569 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 18:28:44.929583 kernel: vgaarb: loaded
Jan 30 18:28:44.929594 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 18:28:44.929604 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 18:28:44.929615 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 18:28:44.929625 kernel: pnp: PnP ACPI init
Jan 30 18:28:44.930763 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 18:28:44.930784 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 18:28:44.930794 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 18:28:44.930804 kernel: NET: Registered PF_INET protocol family
Jan 30 18:28:44.930813 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 18:28:44.930823 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 18:28:44.930832 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 18:28:44.930841 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 18:28:44.930851 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 18:28:44.930860 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 18:28:44.930873 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 18:28:44.930883 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 18:28:44.930892 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 18:28:44.930902 kernel: NET: Registered PF_XDP protocol family
Jan 30 18:28:44.930997 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 30 18:28:44.931093 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 18:28:44.931187 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 18:28:44.931285 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 18:28:44.931386 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 18:28:44.931481 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 18:28:44.931578 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 18:28:44.932731 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 18:28:44.932873 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 18:28:44.932987 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 18:28:44.933090 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 18:28:44.933198 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 18:28:44.933320 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 18:28:44.933424 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 18:28:44.933529 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 18:28:44.933632 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 18:28:44.933753 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 18:28:44.933861 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 18:28:44.933991 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 18:28:44.934094 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 18:28:44.934198 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 18:28:44.934307 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 18:28:44.934419 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 18:28:44.934534 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 18:28:44.934629 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 18:28:44.936731 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 18:28:44.936839 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 18:28:44.936936 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 18:28:44.937032 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 18:28:44.937127 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 18:28:44.937221 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 18:28:44.937322 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 18:28:44.937449 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 18:28:44.937561 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 18:28:44.937667 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 18:28:44.937782 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 18:28:44.937889 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 18:28:44.938012 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 18:28:44.938245 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 18:28:44.938410 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 18:28:44.939714 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 18:28:44.939852 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 18:28:44.939951 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 18:28:44.940047 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 18:28:44.940143 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 18:28:44.940238 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 18:28:44.940345 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 18:28:44.940440 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 18:28:44.940535 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 18:28:44.940629 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 18:28:44.940747 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 18:28:44.940831 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 18:28:44.940916 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 18:28:44.941000 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 30 18:28:44.941106 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 18:28:44.941203 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 30 18:28:44.941318 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 18:28:44.941421 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 30 18:28:44.941519 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 18:28:44.946748 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 18:28:44.946871 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 30 18:28:44.946969 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 18:28:44.947057 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 18:28:44.947152 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 30 18:28:44.947240 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 18:28:44.947335 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 18:28:44.947430 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 30 18:28:44.947519 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 18:28:44.947613 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 18:28:44.947770 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 30 18:28:44.947860 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 18:28:44.947947 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 18:28:44.948040 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 30 18:28:44.948127 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 18:28:44.948215 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 18:28:44.948323 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 30 18:28:44.948412 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 18:28:44.948498 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 18:28:44.948591 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 30 18:28:44.948688 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 18:28:44.948777 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 18:28:44.948791 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 18:28:44.948806 kernel: PCI: CLS 0 bytes, default 64
Jan 30 18:28:44.948817 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 18:28:44.948827 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 30 18:28:44.948837 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 18:28:44.948847 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 30 18:28:44.948857 kernel: Initialise system trusted keyrings
Jan 30 18:28:44.948867 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 18:28:44.948877 kernel: Key type asymmetric registered
Jan 30 18:28:44.948890 kernel: Asymmetric key parser 'x509' registered
Jan 30 18:28:44.948900 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 18:28:44.948910 kernel: io scheduler mq-deadline registered
Jan 30 18:28:44.948920 kernel: io scheduler kyber registered
Jan 30 18:28:44.948930 kernel: io scheduler bfq registered
Jan 30 18:28:44.949027 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 30 18:28:44.949144 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 30 18:28:44.949255 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.949382 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 30 18:28:44.949487 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 30 18:28:44.949591 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.949715 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 30 18:28:44.949820 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 30 18:28:44.949924 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.950036 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 30 18:28:44.950141 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 30 18:28:44.950246 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.950357 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 30 18:28:44.950461 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 30 18:28:44.950566 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.950676 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 30 18:28:44.950809 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 30 18:28:44.950912 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.951021 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 30 18:28:44.951114 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 30 18:28:44.951209 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.951313 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 30 18:28:44.951408 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 30 18:28:44.951517 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 18:28:44.951532 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 18:28:44.951544 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 18:28:44.951555 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 18:28:44.951566 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 18:28:44.951582 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 18:28:44.951593 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 18:28:44.951604 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 18:28:44.951615 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 18:28:44.951627 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 18:28:44.951781 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 18:28:44.951881 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 18:28:44.951977 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T18:28:44 UTC (1738261724)
Jan 30 18:28:44.952076 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 18:28:44.952091 kernel: intel_pstate: CPU model not supported
Jan 30 18:28:44.952102 kernel: NET: Registered PF_INET6 protocol family
Jan 30 18:28:44.952113 kernel: Segment Routing with IPv6
Jan 30 18:28:44.952124 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 18:28:44.952135 kernel: NET: Registered PF_PACKET protocol family
Jan 30 18:28:44.952146 kernel: Key type dns_resolver registered
Jan 30 18:28:44.952157 kernel: IPI shorthand broadcast: enabled
Jan 30 18:28:44.952168 kernel: sched_clock: Marking stable (846004501, 123902568)->(1142210896, -172303827)
Jan 30 18:28:44.952183 kernel: registered taskstats version 1
Jan 30 18:28:44.952194 kernel: Loading compiled-in X.509 certificates
Jan 30 18:28:44.952206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 18:28:44.952216 kernel: Key type .fscrypt registered
Jan 30 18:28:44.952227 kernel: Key type fscrypt-provisioning registered
Jan 30 18:28:44.952238 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 18:28:44.952249 kernel: ima: Allocated hash algorithm: sha1
Jan 30 18:28:44.952260 kernel: ima: No architecture policies found
Jan 30 18:28:44.952271 kernel: clk: Disabling unused clocks
Jan 30 18:28:44.952286 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 18:28:44.952297 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 18:28:44.952314 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 18:28:44.952325 kernel: Run /init as init process
Jan 30 18:28:44.952336 kernel: with arguments:
Jan 30 18:28:44.952347 kernel: /init
Jan 30 18:28:44.952357 kernel: with environment:
Jan 30 18:28:44.952368 kernel: HOME=/
Jan 30 18:28:44.952379 kernel: TERM=linux
Jan 30 18:28:44.952393 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 18:28:44.952411 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 18:28:44.952426 systemd[1]: Detected virtualization kvm.
Jan 30 18:28:44.952438 systemd[1]: Detected architecture x86-64.
Jan 30 18:28:44.952449 systemd[1]: Running in initrd.
Jan 30 18:28:44.952460 systemd[1]: No hostname configured, using default hostname.
Jan 30 18:28:44.952471 systemd[1]: Hostname set to <localhost>.
Jan 30 18:28:44.952487 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 18:28:44.952499 systemd[1]: Queued start job for default target initrd.target.
Jan 30 18:28:44.952510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 18:28:44.952521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 18:28:44.952533 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 18:28:44.952545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 18:28:44.952556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 18:28:44.952568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 18:28:44.952584 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 18:28:44.952597 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 18:28:44.952608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 18:28:44.952620 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 18:28:44.952631 systemd[1]: Reached target paths.target - Path Units.
Jan 30 18:28:44.952642 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 18:28:44.952654 systemd[1]: Reached target swap.target - Swaps.
Jan 30 18:28:44.952665 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 18:28:44.952689 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 18:28:44.952711 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 18:28:44.952724 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 18:28:44.952735 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 18:28:44.952747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 18:28:44.952758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 18:28:44.952770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 18:28:44.952781 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 18:28:44.952798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 18:28:44.952810 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 18:28:44.952821 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 18:28:44.952833 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 18:28:44.952844 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 18:28:44.952856 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 18:28:44.952867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:28:44.952878 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 18:28:44.952890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 18:28:44.952905 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 18:28:44.952946 systemd-journald[201]: Collecting audit messages is disabled.
Jan 30 18:28:44.952977 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 18:28:44.952989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 18:28:44.953001 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 18:28:44.953012 kernel: Bridge firewalling registered
Jan 30 18:28:44.953024 systemd-journald[201]: Journal started
Jan 30 18:28:44.953052 systemd-journald[201]: Runtime Journal (/run/log/journal/3d16af81094b4b5bacca30f17ff2b987) is 4.7M, max 38.0M, 33.2M free.
Jan 30 18:28:44.914425 systemd-modules-load[202]: Inserted module 'overlay'
Jan 30 18:28:44.946799 systemd-modules-load[202]: Inserted module 'br_netfilter'
Jan 30 18:28:44.975753 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 18:28:44.976709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 18:28:44.978047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:28:44.987839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 18:28:44.989822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 18:28:44.992810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 18:28:44.994794 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 18:28:45.014361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:28:45.016915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 18:28:45.021818 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 18:28:45.022434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 18:28:45.023746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 18:28:45.029001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 18:28:45.039373 dracut-cmdline[231]: dracut-dracut-053
Jan 30 18:28:45.042721 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 18:28:45.067294 systemd-resolved[235]: Positive Trust Anchors:
Jan 30 18:28:45.067322 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 18:28:45.067362 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 18:28:45.070661 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jan 30 18:28:45.071815 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 18:28:45.073532 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 18:28:45.132774 kernel: SCSI subsystem initialized
Jan 30 18:28:45.143730 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 18:28:45.154872 kernel: iscsi: registered transport (tcp)
Jan 30 18:28:45.176741 kernel: iscsi: registered transport (qla4xxx)
Jan 30 18:28:45.176861 kernel: QLogic iSCSI HBA Driver
Jan 30 18:28:45.229569 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 18:28:45.234809 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 18:28:45.285983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 18:28:45.286058 kernel: device-mapper: uevent: version 1.0.3
Jan 30 18:28:45.287400 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 18:28:45.335736 kernel: raid6: avx512x4 gen() 17433 MB/s
Jan 30 18:28:45.352728 kernel: raid6: avx512x2 gen() 17636 MB/s
Jan 30 18:28:45.369723 kernel: raid6: avx512x1 gen() 17750 MB/s
Jan 30 18:28:45.386733 kernel: raid6: avx2x4 gen() 17616 MB/s
Jan 30 18:28:45.403755 kernel: raid6: avx2x2 gen() 17668 MB/s
Jan 30 18:28:45.420860 kernel: raid6: avx2x1 gen() 14346 MB/s
Jan 30 18:28:45.420958 kernel: raid6: using algorithm avx512x1 gen() 17750 MB/s
Jan 30 18:28:45.438943 kernel: raid6: .... xor() 19497 MB/s, rmw enabled
Jan 30 18:28:45.439020 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 18:28:45.460736 kernel: xor: automatically using best checksumming function avx
Jan 30 18:28:45.642730 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 18:28:45.660970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 18:28:45.679106 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 18:28:45.693092 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Jan 30 18:28:45.698791 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 18:28:45.711002 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 18:28:45.734411 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Jan 30 18:28:45.780091 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 18:28:45.785976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 18:28:45.852859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 18:28:45.861883 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 18:28:45.877392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 18:28:45.879463 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 18:28:45.880531 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 18:28:45.881246 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 18:28:45.885080 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 18:28:45.903670 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 18:28:45.933743 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 30 18:28:45.983037 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 18:28:45.983057 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 18:28:45.983175 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 18:28:45.983189 kernel: GPT:17805311 != 125829119
Jan 30 18:28:45.983200 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 18:28:45.983225 kernel: GPT:17805311 != 125829119
Jan 30 18:28:45.983236 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 18:28:45.983248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:28:45.956728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 18:28:45.956869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:28:45.958145 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 18:28:45.958804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 18:28:45.958934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:28:45.959767 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:28:45.966895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:28:45.999413 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 18:28:45.999443 kernel: AES CTR mode by8 optimization enabled
Jan 30 18:28:46.026724 kernel: ACPI: bus type USB registered
Jan 30 18:28:46.027699 kernel: usbcore: registered new interface driver usbfs
Jan 30 18:28:46.027736 kernel: usbcore: registered new interface driver hub
Jan 30 18:28:46.027755 kernel: usbcore: registered new device driver usb
Jan 30 18:28:46.039701 kernel: libata version 3.00 loaded.
Jan 30 18:28:46.049133 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (465)
Jan 30 18:28:46.050548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 18:28:46.066062 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474)
Jan 30 18:28:46.064365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:28:46.079911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 18:28:46.087273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 18:28:46.091947 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 18:28:46.122062 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 18:28:46.122098 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 18:28:46.122243 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 18:28:46.122362 kernel: scsi host0: ahci
Jan 30 18:28:46.122475 kernel: scsi host1: ahci
Jan 30 18:28:46.122584 kernel: scsi host2: ahci
Jan 30 18:28:46.122683 kernel: scsi host3: ahci
Jan 30 18:28:46.122801 kernel: scsi host4: ahci
Jan 30 18:28:46.122900 kernel: scsi host5: ahci
Jan 30 18:28:46.122997 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 18:28:46.126388 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 30 18:28:46.126405 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 30 18:28:46.126539 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 30 18:28:46.126553 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 18:28:46.126675 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 30 18:28:46.126708 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 18:28:46.126826 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 30 18:28:46.126939 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 30 18:28:46.126953 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 18:28:46.127064 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 30 18:28:46.127078 kernel: hub 1-0:1.0: USB hub found
Jan 30 18:28:46.127213 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 30 18:28:46.127228 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 18:28:46.127337 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 18:28:46.127458 kernel: hub 2-0:1.0: USB hub found
Jan 30 18:28:46.127580 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 18:28:46.095764 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 18:28:46.096224 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 18:28:46.104973 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 18:28:46.124888 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 18:28:46.133764 disk-uuid[567]: Primary Header is updated.
Jan 30 18:28:46.133764 disk-uuid[567]: Secondary Entries is updated.
Jan 30 18:28:46.133764 disk-uuid[567]: Secondary Header is updated.
Jan 30 18:28:46.141768 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:28:46.146738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:28:46.147738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:28:46.151725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:28:46.365768 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 18:28:46.426704 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 18:28:46.426775 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 18:28:46.436725 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 18:28:46.440300 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 18:28:46.440380 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 18:28:46.442978 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 18:28:46.516750 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 18:28:46.521733 kernel: usbcore: registered new interface driver usbhid
Jan 30 18:28:46.521819 kernel: usbhid: USB HID core driver
Jan 30 18:28:46.525717 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 30 18:28:46.525788 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 30 18:28:47.154790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:28:47.154897 disk-uuid[569]: The operation has completed successfully.
Jan 30 18:28:47.195917 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 18:28:47.196034 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 18:28:47.205839 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 18:28:47.210915 sh[586]: Success
Jan 30 18:28:47.232699 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 18:28:47.274033 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 18:28:47.280777 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 18:28:47.284716 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 18:28:47.298251 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 18:28:47.298301 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 18:28:47.298316 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 18:28:47.299449 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 18:28:47.301217 kernel: BTRFS info (device dm-0): using free space tree Jan 30 18:28:47.307390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 18:28:47.308278 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 18:28:47.313846 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 18:28:47.315998 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 18:28:47.327883 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 18:28:47.327920 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 18:28:47.328702 kernel: BTRFS info (device vda6): using free space tree Jan 30 18:28:47.333698 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 18:28:47.342224 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 18:28:47.343722 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 18:28:47.348356 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 18:28:47.354839 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 18:28:47.438328 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 18:28:47.445887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 18:28:47.470122 systemd-networkd[769]: lo: Link UP Jan 30 18:28:47.471154 systemd-networkd[769]: lo: Gained carrier Jan 30 18:28:47.471501 ignition[681]: Ignition 2.19.0 Jan 30 18:28:47.472557 systemd-networkd[769]: Enumeration completed Jan 30 18:28:47.471509 ignition[681]: Stage: fetch-offline Jan 30 18:28:47.472728 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 18:28:47.471550 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:47.473849 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 18:28:47.471560 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:47.473853 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 18:28:47.471724 ignition[681]: parsed url from cmdline: "" Jan 30 18:28:47.474691 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 18:28:47.471728 ignition[681]: no config URL provided Jan 30 18:28:47.474895 systemd-networkd[769]: eth0: Link UP Jan 30 18:28:47.471734 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 18:28:47.474899 systemd-networkd[769]: eth0: Gained carrier Jan 30 18:28:47.471742 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 30 18:28:47.474906 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
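
The Btrfs mount above is of /dev/mapper/usr, the dm-verity device that verity-setup.service just finished assembling; Flatcar passes its root hash on the kernel command line as verity.usrhash=. A minimal sketch of opening such a mapping by hand (assuming the standard veritysetup CLI; on Flatcar the USR-A partition carries both the data and the hash tree, so this is illustrative rather than a drop-in command):

    import re
    import subprocess

    # Pull the root hash from the kernel command line.
    cmdline = open("/proc/cmdline").read()
    m = re.search(r"verity\.usrhash=([0-9a-f]+)", cmdline)
    assert m, "no verity.usrhash= on the command line"
    root_hash = m.group(1)

    # Hypothetical device paths; here the USR-A partition doubles as data
    # and hash device. NOTE: the production setup additionally passes the
    # byte offset of the appended hash area, omitted from this sketch.
    data_dev = hash_dev = "/dev/disk/by-partlabel/USR-A"

    # 'veritysetup open <data> <name> <hash> <root_hash>' creates /dev/mapper/usr.
    subprocess.run(
        ["veritysetup", "open", data_dev, "usr", hash_dev, root_hash],
        check=True,
    )
    # Any block whose hash does not chain up to root_hash now fails to read,
    # which is what makes the read-only /usr tamper-evident.
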
Jan 30 18:28:47.471747 ignition[681]: failed to fetch config: resource requires networking Jan 30 18:28:47.475895 systemd[1]: Reached target network.target - Network. Jan 30 18:28:47.471973 ignition[681]: Ignition finished successfully Jan 30 18:28:47.480843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 18:28:47.490730 systemd-networkd[769]: eth0: DHCPv4 address 10.244.90.134/30, gateway 10.244.90.133 acquired from 10.244.90.133 Jan 30 18:28:47.499039 ignition[777]: Ignition 2.19.0 Jan 30 18:28:47.499050 ignition[777]: Stage: fetch Jan 30 18:28:47.499261 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:47.499271 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:47.499364 ignition[777]: parsed url from cmdline: "" Jan 30 18:28:47.499367 ignition[777]: no config URL provided Jan 30 18:28:47.499372 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 18:28:47.499379 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jan 30 18:28:47.499547 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 18:28:47.499812 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 18:28:47.499893 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 18:28:47.519378 ignition[777]: GET result: OK Jan 30 18:28:47.519906 ignition[777]: parsing config with SHA512: 6126b80387087bdb95657266534f94a23e1fa8f65de080abde2509f43a97f84aad8a5d3a40592b1bd21850ddb1088bdfbaf4f1b37fb0c1aedf193427c5077a2e Jan 30 18:28:47.524938 unknown[777]: fetched base config from "system" Jan 30 18:28:47.525011 unknown[777]: fetched base config from "system" Jan 30 18:28:47.525087 unknown[777]: fetched user config from "openstack" Jan 30 18:28:47.527401 ignition[777]: fetch: fetch complete Jan 30 18:28:47.527411 ignition[777]: fetch: fetch passed Jan 30 18:28:47.527471 ignition[777]: Ignition finished successfully Jan 30 18:28:47.531528 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 18:28:47.536890 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 18:28:47.557638 ignition[784]: Ignition 2.19.0 Jan 30 18:28:47.557652 ignition[784]: Stage: kargs Jan 30 18:28:47.557892 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:47.557903 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:47.560194 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 18:28:47.558949 ignition[784]: kargs: kargs passed Jan 30 18:28:47.558998 ignition[784]: Ignition finished successfully Jan 30 18:28:47.563855 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 18:28:47.586258 ignition[790]: Ignition 2.19.0 Jan 30 18:28:47.586289 ignition[790]: Stage: disks Jan 30 18:28:47.586836 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:47.586864 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:47.589307 ignition[790]: disks: disks passed Jan 30 18:28:47.589413 ignition[790]: Ignition finished successfully Jan 30 18:28:47.591362 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 18:28:47.592497 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 18:28:47.593595 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
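
With networking up, the fetch stage finds no config drive and pulls the user data from the OpenStack metadata service, then logs a SHA512 fingerprint of what it received. The same fetch-and-hash step as a minimal Python sketch (endpoint taken from the log; Ignition's real client adds retries and the config-drive fallback shown above):

    import hashlib
    import urllib.request

    # Link-local metadata endpoint logged by Ignition above.
    URL = "http://169.254.169.254/openstack/latest/user_data"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        user_data = resp.read()

    # Ignition logs this digest so a config can be matched to what was served.
    print("parsing config with SHA512:", hashlib.sha512(user_data).hexdigest())
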
Jan 30 18:28:47.595369 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 18:28:47.597117 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 18:28:47.598600 systemd[1]: Reached target basic.target - Basic System. Jan 30 18:28:47.609906 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 18:28:47.632432 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 18:28:47.636772 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 18:28:47.645349 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 18:28:47.756091 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 18:28:47.756650 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 18:28:47.757623 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 18:28:47.768764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 18:28:47.770778 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 18:28:47.771589 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 18:28:47.773912 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 18:28:47.774442 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 18:28:47.774494 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 18:28:47.780701 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Jan 30 18:28:47.781532 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 18:28:47.785052 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 18:28:47.785086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 18:28:47.785101 kernel: BTRFS info (device vda6): using free space tree Jan 30 18:28:47.788728 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 18:28:47.799855 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 18:28:47.803432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 18:28:47.845820 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 18:28:47.850716 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 30 18:28:47.857549 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 18:28:47.860896 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 18:28:47.962451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 18:28:47.967857 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 18:28:47.973986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 18:28:47.984715 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 18:28:48.007410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 18:28:48.008274 ignition[922]: INFO : Ignition 2.19.0 Jan 30 18:28:48.008274 ignition[922]: INFO : Stage: mount Jan 30 18:28:48.009311 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:48.009311 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:48.011075 ignition[922]: INFO : mount: mount passed Jan 30 18:28:48.011075 ignition[922]: INFO : Ignition finished successfully Jan 30 18:28:48.011778 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 18:28:48.298918 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 18:28:48.945986 systemd-networkd[769]: eth0: Gained IPv6LL Jan 30 18:28:49.593321 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:16a1:24:19ff:fef4:5a86/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:16a1:24:19ff:fef4:5a86/64 assigned by NDisc. Jan 30 18:28:49.593357 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 18:28:54.924772 coreos-metadata[808]: Jan 30 18:28:54.924 WARN failed to locate config-drive, using the metadata service API instead Jan 30 18:28:54.941944 coreos-metadata[808]: Jan 30 18:28:54.941 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 18:28:54.956972 coreos-metadata[808]: Jan 30 18:28:54.956 INFO Fetch successful Jan 30 18:28:54.958400 coreos-metadata[808]: Jan 30 18:28:54.958 INFO wrote hostname srv-eex0h.gb1.brightbox.com to /sysroot/etc/hostname Jan 30 18:28:54.962785 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 18:28:54.963031 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 18:28:54.971865 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 18:28:54.999881 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 18:28:55.007700 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Jan 30 18:28:55.009926 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 18:28:55.009955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 18:28:55.011185 kernel: BTRFS info (device vda6): using free space tree Jan 30 18:28:55.014695 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 18:28:55.016467 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
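
The hostname agent above follows the same pattern as the Ignition fetch: no config drive is located, so it falls back to the metadata API and writes the answer into the target root. A rough Python equivalent (endpoint and paths from the log; the real agent also retries and checks both spellings of the config-drive label):

    import os
    import urllib.request

    ENDPOINT = "http://169.254.169.254/latest/meta-data/hostname"

    def write_hostname(sysroot="/sysroot"):
        # Prefer a config drive when present; otherwise hit the metadata
        # service, as the WARN line above shows this boot did.
        if not os.path.exists("/dev/disk/by-label/config-2"):
            with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
                hostname = resp.read().decode().strip()
            with open(os.path.join(sysroot, "etc/hostname"), "w") as f:
                f.write(hostname + "\n")
            return hostname

    write_hostname()
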
Jan 30 18:28:55.045487 ignition[958]: INFO : Ignition 2.19.0 Jan 30 18:28:55.045487 ignition[958]: INFO : Stage: files Jan 30 18:28:55.046433 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:55.046433 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:55.047451 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 30 18:28:55.047451 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 18:28:55.048537 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 18:28:55.050175 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 18:28:55.050709 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 18:28:55.050709 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 18:28:55.050538 unknown[958]: wrote ssh authorized keys file for user: core Jan 30 18:28:55.055443 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 18:28:55.055443 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 18:28:55.291246 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 18:28:55.678787 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 18:28:55.680268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 18:28:55.680268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 18:28:55.680268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 18:28:55.680268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 18:28:55.680268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 18:28:55.685065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 18:28:56.245853 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 18:28:57.467880 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 18:28:57.467880 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 18:28:57.472381 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 18:28:57.472381 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 18:28:57.472381 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 18:28:57.472381 ignition[958]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 18:28:57.472381 ignition[958]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 18:28:57.472381 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 18:28:57.472381 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 18:28:57.472381 ignition[958]: INFO : files: files passed Jan 30 18:28:57.472381 ignition[958]: INFO : Ignition finished successfully Jan 30 18:28:57.473080 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 18:28:57.481938 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 18:28:57.483847 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 18:28:57.489824 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 18:28:57.489916 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 18:28:57.501215 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 18:28:57.502145 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 18:28:57.502772 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 18:28:57.504282 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 18:28:57.505767 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 18:28:57.509813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 18:28:57.560516 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 18:28:57.560844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 18:28:57.563369 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 30 18:28:57.564525 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 18:28:57.565636 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 18:28:57.578891 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 18:28:57.596916 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 18:28:57.604861 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 18:28:57.614188 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 18:28:57.615250 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 18:28:57.616327 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 18:28:57.617263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 18:28:57.617382 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 18:28:57.618193 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 18:28:57.618658 systemd[1]: Stopped target basic.target - Basic System. Jan 30 18:28:57.619491 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 18:28:57.620208 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 18:28:57.620892 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 18:28:57.621658 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 18:28:57.622457 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 18:28:57.623231 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 18:28:57.623950 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 18:28:57.624781 systemd[1]: Stopped target swap.target - Swaps. Jan 30 18:28:57.625508 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 18:28:57.625620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 18:28:57.626583 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 18:28:57.627412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 18:28:57.628154 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 18:28:57.628243 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 18:28:57.628979 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 18:28:57.629078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 18:28:57.630050 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 18:28:57.630155 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 18:28:57.631086 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 18:28:57.631180 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 18:28:57.643200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 18:28:57.643610 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 18:28:57.643785 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 18:28:57.650915 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 18:28:57.651331 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 30 18:28:57.651488 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 18:28:57.652057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 18:28:57.652205 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 18:28:57.656903 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 18:28:57.657006 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 18:28:57.664130 ignition[1011]: INFO : Ignition 2.19.0 Jan 30 18:28:57.664130 ignition[1011]: INFO : Stage: umount Jan 30 18:28:57.665129 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 18:28:57.665129 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 18:28:57.666185 ignition[1011]: INFO : umount: umount passed Jan 30 18:28:57.666185 ignition[1011]: INFO : Ignition finished successfully Jan 30 18:28:57.668084 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 18:28:57.669721 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 18:28:57.670328 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 18:28:57.670372 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 18:28:57.672025 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 18:28:57.672068 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 18:28:57.672469 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 18:28:57.672502 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 18:28:57.673987 systemd[1]: Stopped target network.target - Network. Jan 30 18:28:57.674329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 18:28:57.674369 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 18:28:57.674791 systemd[1]: Stopped target paths.target - Path Units. Jan 30 18:28:57.675132 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 18:28:57.679738 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 18:28:57.680655 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 18:28:57.681034 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 18:28:57.681797 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 18:28:57.681836 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 18:28:57.682490 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 18:28:57.682522 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 18:28:57.683147 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 18:28:57.683185 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 18:28:57.684250 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 18:28:57.684291 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 18:28:57.685122 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 18:28:57.688659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 18:28:57.690765 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 18:28:57.691850 systemd-networkd[769]: eth0: DHCPv6 lease lost Jan 30 18:28:57.695504 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 30 18:28:57.695628 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 18:28:57.696764 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 18:28:57.696852 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 18:28:57.705151 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 18:28:57.705544 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 18:28:57.705590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 18:28:57.706155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 18:28:57.709493 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 18:28:57.709608 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 18:28:57.713597 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 18:28:57.715141 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 18:28:57.716760 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 18:28:57.716806 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 18:28:57.717857 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 18:28:57.717900 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 18:28:57.726118 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 18:28:57.726804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 18:28:57.728338 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 18:28:57.728758 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 18:28:57.730217 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 18:28:57.730275 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 18:28:57.731809 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 18:28:57.731845 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 18:28:57.732785 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 18:28:57.732828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 18:28:57.734566 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 18:28:57.734608 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 18:28:57.735441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 18:28:57.735482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 18:28:57.742991 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 18:28:57.744208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 18:28:57.744635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 18:28:57.745394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 18:28:57.745429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 18:28:57.747874 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 18:28:57.747977 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 18:28:57.748786 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 30 18:28:57.748874 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 18:28:57.751433 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 18:28:57.751527 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 18:28:57.752397 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 18:28:57.756869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 18:28:57.765466 systemd[1]: Switching root. Jan 30 18:28:57.793663 systemd-journald[201]: Journal stopped Jan 30 18:28:58.744120 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 30 18:28:58.744192 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 18:28:58.744219 kernel: SELinux: policy capability open_perms=1 Jan 30 18:28:58.744229 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 18:28:58.744244 kernel: SELinux: policy capability always_check_network=0 Jan 30 18:28:58.744256 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 18:28:58.744268 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 18:28:58.744280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 18:28:58.744294 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 18:28:58.744305 kernel: audit: type=1403 audit(1738261737.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 18:28:58.744318 systemd[1]: Successfully loaded SELinux policy in 39.935ms. Jan 30 18:28:58.744340 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.664ms. Jan 30 18:28:58.744358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 18:28:58.744370 systemd[1]: Detected virtualization kvm. Jan 30 18:28:58.744383 systemd[1]: Detected architecture x86-64. Jan 30 18:28:58.744394 systemd[1]: Detected first boot. Jan 30 18:28:58.744406 systemd[1]: Hostname set to <srv-eex0h.gb1.brightbox.com>. Jan 30 18:28:58.744424 systemd[1]: Initializing machine ID from VM UUID. Jan 30 18:28:58.744436 zram_generator::config[1055]: No configuration found. Jan 30 18:28:58.744450 systemd[1]: Populated /etc with preset unit settings. Jan 30 18:28:58.744480 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 18:28:58.744493 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 18:28:58.744510 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 18:28:58.744525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 18:28:58.744538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 18:28:58.744553 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 18:28:58.744567 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 18:28:58.744580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 18:28:58.744597 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 18:28:58.744610 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 18:28:58.744623 systemd[1]: Created slice user.slice - User and Session Slice. 
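
Note "Initializing machine ID from VM UUID": on first boot under KVM, systemd seeds /etc/machine-id from the hypervisor-provided SMBIOS UUID instead of generating a random value, so the ID stays stable for this instance. A minimal sketch of that derivation (assuming the DMI product_uuid sysfs attribute; systemd's real logic consults several sources and handles byte-order quirks):

    import pathlib

    # KVM exposes the VM UUID via SMBIOS/DMI; reusing it keeps the machine ID
    # stable across reinstalls of the same instance.
    uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    # machine-id format: the UUID's 32 hex digits, lower-case, no dashes.
    machine_id = uuid.replace("-", "").lower()
    pathlib.Path("/etc/machine-id").write_text(machine_id + "\n")
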
Jan 30 18:28:58.744637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 18:28:58.744650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 18:28:58.744664 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 18:28:58.744680 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 18:28:58.744693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 18:28:58.746700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 18:28:58.746722 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 18:28:58.746736 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 18:28:58.746755 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 18:28:58.746782 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 18:28:58.746798 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 18:28:58.746813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 18:28:58.746826 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 18:28:58.746840 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 18:28:58.746855 systemd[1]: Reached target slices.target - Slice Units. Jan 30 18:28:58.746871 systemd[1]: Reached target swap.target - Swaps. Jan 30 18:28:58.746885 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 18:28:58.746899 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 18:28:58.746912 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 18:28:58.746926 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 18:28:58.746939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 18:28:58.746953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 18:28:58.746968 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 18:28:58.746984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 18:28:58.747006 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 18:28:58.747020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:28:58.747034 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 18:28:58.747048 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 18:28:58.747062 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 18:28:58.747076 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 18:28:58.747090 systemd[1]: Reached target machines.target - Containers. Jan 30 18:28:58.747103 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 18:28:58.747123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 30 18:28:58.747137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 18:28:58.747150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 18:28:58.747163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 18:28:58.747178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 18:28:58.747191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 18:28:58.747204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 18:28:58.747217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 18:28:58.747231 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 18:28:58.747258 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 18:28:58.747271 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 18:28:58.747283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 18:28:58.747296 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 18:28:58.747309 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 18:28:58.747321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 18:28:58.747333 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 18:28:58.747366 systemd-journald[1144]: Collecting audit messages is disabled. Jan 30 18:28:58.747393 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 18:28:58.747407 systemd-journald[1144]: Journal started Jan 30 18:28:58.747431 systemd-journald[1144]: Runtime Journal (/run/log/journal/3d16af81094b4b5bacca30f17ff2b987) is 4.7M, max 38.0M, 33.2M free. Jan 30 18:28:58.496054 systemd[1]: Queued start job for default target multi-user.target. Jan 30 18:28:58.524110 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 18:28:58.524925 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 18:28:58.754906 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 18:28:58.754955 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 18:28:58.754993 systemd[1]: Stopped verity-setup.service. Jan 30 18:28:58.756393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:28:58.763703 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 18:28:58.765020 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 18:28:58.765526 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 18:28:58.765976 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 18:28:58.766385 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 18:28:58.771037 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 18:28:58.772170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 18:28:58.773139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 18:28:58.774304 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 30 18:28:58.775852 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 18:28:58.776481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 18:28:58.776616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 18:28:58.777308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 18:28:58.777462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 18:28:58.779124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 18:28:58.779877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 18:28:58.794304 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 18:28:58.802770 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 18:28:58.816703 kernel: fuse: init (API version 7.39) Jan 30 18:28:58.817835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 18:28:58.819719 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 18:28:58.820396 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 18:28:58.823391 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 18:28:58.823451 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 18:28:58.825633 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 18:28:58.831913 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 18:28:58.840791 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 18:28:58.841333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 18:28:58.851866 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 18:28:58.856719 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 18:28:58.860773 kernel: ACPI: bus type drm_connector registered Jan 30 18:28:58.857615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 18:28:58.860032 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 18:28:58.869717 kernel: loop: module loaded Jan 30 18:28:58.870848 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 18:28:58.871919 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 18:28:58.872068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 18:28:58.872757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 18:28:58.872899 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 18:28:58.874000 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 18:28:58.874349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 18:28:58.878361 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 18:28:58.880354 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 30 18:28:58.884343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 18:28:58.885037 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 18:28:58.890601 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 18:28:58.902762 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 18:28:58.919593 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 18:28:58.920957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 18:28:58.923299 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 18:28:58.923696 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 18:28:58.926393 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 18:28:58.933770 systemd-journald[1144]: Time spent on flushing to /var/log/journal/3d16af81094b4b5bacca30f17ff2b987 is 65.137ms for 1156 entries. Jan 30 18:28:58.933770 systemd-journald[1144]: System Journal (/var/log/journal/3d16af81094b4b5bacca30f17ff2b987) is 8.0M, max 584.8M, 576.8M free. Jan 30 18:28:59.022648 systemd-journald[1144]: Received client request to flush runtime journal. Jan 30 18:28:59.022950 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 18:28:59.023001 kernel: loop2: detected capacity change from 0 to 205544 Jan 30 18:28:58.942785 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 18:28:58.972432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 18:28:58.974038 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 18:28:59.025761 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 18:28:59.028751 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 18:28:59.040873 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 18:28:59.042238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 18:28:59.053913 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 18:28:59.060744 kernel: loop3: detected capacity change from 0 to 8 Jan 30 18:28:59.099455 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 30 18:28:59.099478 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 30 18:28:59.103486 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 18:28:59.110726 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 18:28:59.115249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 18:28:59.133052 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 18:28:59.152704 kernel: loop6: detected capacity change from 0 to 205544 Jan 30 18:28:59.169780 kernel: loop7: detected capacity change from 0 to 8 Jan 30 18:28:59.170030 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 18:28:59.170662 (sd-merge)[1213]: Merged extensions into '/usr'. Jan 30 18:28:59.175125 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 18:28:59.175257 systemd[1]: Reloading... 
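
The "(sd-merge)" lines show systemd-sysext at work: each extension image is attached, and all of their /usr trees are layered over the base /usr with a single read-only overlayfs mount; the systemd reload requested right after is what makes the freshly merged unit files visible. A sketch of the shape of that final mount (mount points are hypothetical; the real tool also validates each image's extension-release metadata before merging):

    import subprocess

    # Hypothetical mount points where each extension image has been attached,
    # mirroring the four extensions named in the log.
    layers = [
        "/run/sysext/containerd-flatcar/usr",
        "/run/sysext/docker-flatcar/usr",
        "/run/sysext/kubernetes/usr",
        "/run/sysext/oem-openstack/usr",
    ]

    # In overlayfs the leftmost lowerdir has the highest precedence, so the
    # base /usr goes last; with no upperdir the result is read-only.
    lowerdir = ":".join(layers + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={lowerdir}", "/usr"],
        check=True,
    )
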
Jan 30 18:28:59.282758 zram_generator::config[1243]: No configuration found. Jan 30 18:28:59.452489 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 18:28:59.524670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 18:28:59.572111 systemd[1]: Reloading finished in 396 ms. Jan 30 18:28:59.609431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 18:28:59.610474 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 18:28:59.619887 systemd[1]: Starting ensure-sysext.service... Jan 30 18:28:59.624898 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 18:28:59.641764 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)... Jan 30 18:28:59.641777 systemd[1]: Reloading... Jan 30 18:28:59.677502 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 18:28:59.677887 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 18:28:59.679946 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 18:28:59.680245 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 30 18:28:59.680308 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 30 18:28:59.686853 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 18:28:59.686865 systemd-tmpfiles[1297]: Skipping /boot Jan 30 18:28:59.710128 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 18:28:59.710141 systemd-tmpfiles[1297]: Skipping /boot Jan 30 18:28:59.745802 zram_generator::config[1320]: No configuration found. Jan 30 18:28:59.899294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 18:28:59.948592 systemd[1]: Reloading finished in 306 ms. Jan 30 18:28:59.962247 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 18:28:59.963179 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 18:28:59.976885 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 18:28:59.980916 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 18:28:59.982827 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 18:28:59.991830 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 18:28:59.993923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 18:28:59.997889 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 18:29:00.004107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:29:00.004300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 30 18:29:00.011971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 18:29:00.016012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 18:29:00.018943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 18:29:00.019468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 18:29:00.019591 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:29:00.022812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:29:00.023015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 18:29:00.023166 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 18:29:00.026931 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 18:29:00.027356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:29:00.030591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:29:00.031855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 18:29:00.039983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 18:29:00.040532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 18:29:00.040672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 18:29:00.043768 systemd[1]: Finished ensure-sysext.service. Jan 30 18:29:00.054963 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 18:29:00.068066 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 18:29:00.071011 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 18:29:00.081665 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Jan 30 18:29:00.082869 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 18:29:00.085039 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 18:29:00.085840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 18:29:00.086741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 18:29:00.090014 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 18:29:00.090907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 18:29:00.091599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 18:29:00.092735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 18:29:00.097503 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 18:29:00.103127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 30 18:29:00.106459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 18:29:00.106538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 18:29:00.106593 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 18:29:00.118938 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 18:29:00.130871 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 18:29:00.132470 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 18:29:00.158957 augenrules[1435]: No rules Jan 30 18:29:00.160728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 18:29:00.172202 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 18:29:00.271741 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 18:29:00.274700 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 18:29:00.281701 kernel: ACPI: button: Power Button [PWRF] Jan 30 18:29:00.286577 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 18:29:00.346553 systemd-networkd[1424]: lo: Link UP Jan 30 18:29:00.347024 systemd-networkd[1424]: lo: Gained carrier Jan 30 18:29:00.347868 systemd-networkd[1424]: Enumeration completed Jan 30 18:29:00.348074 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 18:29:00.355937 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 18:29:00.368152 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 18:29:00.368749 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 18:29:00.376266 systemd-resolved[1385]: Positive Trust Anchors: Jan 30 18:29:00.376288 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 18:29:00.376330 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 18:29:00.385179 systemd-resolved[1385]: Using system hostname 'srv-eex0h.gb1.brightbox.com'. Jan 30 18:29:00.387479 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 18:29:00.387606 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 18:29:00.388002 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 18:29:00.388864 systemd[1]: Reached target network.target - Network. Jan 30 18:29:00.389523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
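[Editor's note] The "Positive Trust Anchors" block is systemd-resolved printing its built-in DNSSEC root anchor (the 2017 root KSK, key tag 20326) plus the standard negative anchors for private and special-use zones, which are exempted from DNSSEC validation. Per dnssec-trust-anchors.d(5), anchors can be supplemented or overridden with drop-in files; for example, pinning the same root DS record shown in the log:

    mkdir -p /etc/dnssec-trust-anchors.d
    cat > /etc/dnssec-trust-anchors.d/root.positive <<'EOF'
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
    EOF
    systemctl restart systemd-resolved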
Jan 30 18:29:00.391423 systemd-networkd[1424]: eth0: Link UP Jan 30 18:29:00.391498 systemd-networkd[1424]: eth0: Gained carrier Jan 30 18:29:00.391577 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 18:29:00.394698 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1428) Jan 30 18:29:00.401729 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 18:29:00.408101 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 18:29:00.408124 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 18:29:00.408301 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 18:29:00.408787 systemd-networkd[1424]: eth0: DHCPv4 address 10.244.90.134/30, gateway 10.244.90.133 acquired from 10.244.90.133 Jan 30 18:29:00.409580 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Jan 30 18:29:00.474986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 18:29:00.480345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 18:29:00.486887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 18:29:00.528848 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 18:29:00.619329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 18:29:00.635743 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 18:29:00.644934 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 18:29:00.658737 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 18:29:00.684581 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 18:29:00.687788 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 18:29:00.689156 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 18:29:00.691019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 18:29:00.692358 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 18:29:00.694105 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 18:29:00.694215 systemd-timesyncd[1400]: Contacted time server 212.69.41.125:123 (0.flatcar.pool.ntp.org). Jan 30 18:29:00.694329 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-01-30 18:29:00.635119 UTC. Jan 30 18:29:00.695244 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 18:29:00.695813 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 18:29:00.696342 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 18:29:00.696401 systemd[1]: Reached target paths.target - Path Units. Jan 30 18:29:00.696903 systemd[1]: Reached target timers.target - Timer Units. Jan 30 18:29:00.698459 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
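[Editor's note] eth0 above is matched by the catch-all /usr/lib/systemd/network/zz-default.network shipped with the OS (hence the "potentially unpredictable interface name" note) and acquires 10.244.90.134/30 over DHCPv4. A site-specific .network file with a higher-priority name overrides the default; a minimal sketch of such a unit (interface glob and filename are illustrative):

    cat > /etc/systemd/network/20-wired.network <<'EOF'
    [Match]
    Name=eth*

    [Network]
    DHCP=yes
    EOF
    networkctl reload    # re-evaluate .network files without a reboot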
Jan 30 18:29:00.701162 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 18:29:00.708117 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 18:29:00.712247 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 18:29:00.714973 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 18:29:00.715656 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 18:29:00.716248 systemd[1]: Reached target basic.target - Basic System. Jan 30 18:29:00.716843 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 18:29:00.716977 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 18:29:00.721838 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 18:29:00.726216 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 18:29:00.733752 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 18:29:00.734879 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 18:29:00.737403 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 18:29:00.743780 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 18:29:00.744281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 18:29:00.745881 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 18:29:00.754783 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 18:29:00.759789 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 18:29:00.761962 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 18:29:00.767862 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 18:29:00.769489 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 18:29:00.770062 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 18:29:00.778948 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 18:29:00.780593 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 18:29:00.808673 dbus-daemon[1477]: [system] SELinux support is enabled Jan 30 18:29:00.819368 dbus-daemon[1477]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1424 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 18:29:00.812011 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 18:29:00.817606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 18:29:00.818177 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
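[Editor's note] docker.socket and sshd.socket above are socket activation: systemd owns the listening socket and only starts the service on the first connection, which is why "Listening on docker.socket" appears here long before dockerd itself starts (at 18:29:26 below). The socket-to-service mapping can be inspected at runtime:

    systemctl list-sockets --no-pager        # LISTEN column pairs /run/docker.sock with docker.socket
    systemctl status docker.socket --no-pager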
Jan 30 18:29:00.818797 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 18:29:00.818818 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 18:29:00.822842 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 18:29:00.837025 jq[1478]: false Jan 30 18:29:00.832895 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 18:29:00.836536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 18:29:00.836762 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 18:29:00.839007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 18:29:00.839216 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 18:29:00.848029 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 18:29:00.854187 extend-filesystems[1479]: Found loop4 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found loop5 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found loop6 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found loop7 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda1 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda2 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda3 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found usr Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda4 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda6 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda7 Jan 30 18:29:00.854187 extend-filesystems[1479]: Found vda9 Jan 30 18:29:00.854187 extend-filesystems[1479]: Checking size of /dev/vda9 Jan 30 18:29:00.868801 jq[1488]: true Jan 30 18:29:00.887220 extend-filesystems[1479]: Resized partition /dev/vda9 Jan 30 18:29:00.888751 tar[1492]: linux-amd64/helm Jan 30 18:29:00.892051 update_engine[1487]: I20250130 18:29:00.890381 1487 main.cc:92] Flatcar Update Engine starting Jan 30 18:29:00.898614 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 30 18:29:00.898435 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 18:29:00.898969 extend-filesystems[1512]: resize2fs 1.47.1 (20-May-2024) Jan 30 18:29:00.905039 systemd[1]: Started update-engine.service - Update Engine. Jan 30 18:29:00.915509 update_engine[1487]: I20250130 18:29:00.912910 1487 update_check_scheduler.cc:74] Next update check in 5m1s Jan 30 18:29:00.913904 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 18:29:00.924567 jq[1509]: true Jan 30 18:29:00.957984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1423) Jan 30 18:29:00.976834 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 18:29:00.977032 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 18:29:01.093663 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 18:29:01.095738 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 18:29:01.096022 systemd-logind[1486]: New seat seat0. 
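[Editor's note] The kernel line just above states the whole root-grow in block counts: 1617920 -> 15121403 blocks at 4 KiB each, i.e. about 6.6 GB growing to about 61.9 GB (57.7 GiB), performed online while /dev/vda9 is mounted on /. extend-filesystems is Flatcar's own service; the manual equivalent on a generic system would be (partition-grow step shown with cloud-utils' growpart, as an assumption):

    growpart /dev/vda 9     # grow partition 9 to fill the disk (cloud-utils)
    resize2fs /dev/vda9     # ext4 grows online; no unmount needed
    # sanity check: 15121403 blocks * 4096 B = 61,937,266,688 B, i.e. ~61.9 GB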
Jan 30 18:29:01.099521 bash[1535]: Updated "/home/core/.ssh/authorized_keys" Jan 30 18:29:01.101260 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 18:29:01.104838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 18:29:01.118761 systemd[1]: Starting sshkeys.service... Jan 30 18:29:01.138864 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 18:29:01.154062 extend-filesystems[1512]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 18:29:01.154062 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 18:29:01.154062 extend-filesystems[1512]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 18:29:01.155910 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 18:29:01.156717 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Jan 30 18:29:01.157271 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 18:29:01.169572 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 18:29:01.179821 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 18:29:01.223861 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 18:29:01.224012 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 18:29:01.226927 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1494 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 18:29:01.235963 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 18:29:01.272553 polkitd[1551]: Started polkitd version 121 Jan 30 18:29:01.273994 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 18:29:01.299062 polkitd[1551]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 18:29:01.299130 polkitd[1551]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 18:29:01.302557 polkitd[1551]: Finished loading, compiling and executing 2 rules Jan 30 18:29:01.303760 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 18:29:01.303914 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 18:29:01.304176 polkitd[1551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 18:29:01.334149 systemd-hostnamed[1494]: Hostname set to (static) Jan 30 18:29:01.400901 containerd[1503]: time="2025-01-30T18:29:01.400813897Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 18:29:01.405596 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 18:29:01.447376 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 18:29:01.458028 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 18:29:01.462715 containerd[1503]: time="2025-01-30T18:29:01.462028422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.465872 containerd[1503]: time="2025-01-30T18:29:01.465837710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 18:29:01.466925 containerd[1503]: time="2025-01-30T18:29:01.466555461Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 18:29:01.466925 containerd[1503]: time="2025-01-30T18:29:01.466586997Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 18:29:01.466925 containerd[1503]: time="2025-01-30T18:29:01.466783571Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 18:29:01.466925 containerd[1503]: time="2025-01-30T18:29:01.466801905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.466925 containerd[1503]: time="2025-01-30T18:29:01.466856235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 18:29:01.466925 containerd[1503]: time="2025-01-30T18:29:01.466870400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.467243 containerd[1503]: time="2025-01-30T18:29:01.467223912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 18:29:01.467313 containerd[1503]: time="2025-01-30T18:29:01.467301905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.467578 containerd[1503]: time="2025-01-30T18:29:01.467558072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 18:29:01.467639 containerd[1503]: time="2025-01-30T18:29:01.467627695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.468058 containerd[1503]: time="2025-01-30T18:29:01.467786166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.468058 containerd[1503]: time="2025-01-30T18:29:01.468013900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 18:29:01.468415 containerd[1503]: time="2025-01-30T18:29:01.468394630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 18:29:01.468473 containerd[1503]: time="2025-01-30T18:29:01.468462358Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 18:29:01.468601 containerd[1503]: time="2025-01-30T18:29:01.468588370Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 18:29:01.468724 containerd[1503]: time="2025-01-30T18:29:01.468710899Z" level=info msg="metadata content store policy set" policy=shared Jan 30 18:29:01.469994 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 18:29:01.470268 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 18:29:01.472131 containerd[1503]: time="2025-01-30T18:29:01.471518391Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 18:29:01.472131 containerd[1503]: time="2025-01-30T18:29:01.471568009Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 18:29:01.472131 containerd[1503]: time="2025-01-30T18:29:01.471585643Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 18:29:01.472131 containerd[1503]: time="2025-01-30T18:29:01.471641420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 18:29:01.472131 containerd[1503]: time="2025-01-30T18:29:01.471659809Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 18:29:01.472131 containerd[1503]: time="2025-01-30T18:29:01.471810707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472338646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472468625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472486296Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472499600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472518914Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472538698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472560242Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472581078Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472595835Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472609183Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472621328Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472635202Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472659352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473007 containerd[1503]: time="2025-01-30T18:29:01.472673618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472705911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472720102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472731964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472769973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472797821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472812252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472824617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472839045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472850348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472862023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472875168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472896705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472922163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472939017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.473348 containerd[1503]: time="2025-01-30T18:29:01.472950531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474731583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474760884Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474854633Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474869500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474879638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474893562Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474904165Z" level=info msg="NRI interface is disabled by configuration." Jan 30 18:29:01.475736 containerd[1503]: time="2025-01-30T18:29:01.474914987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 18:29:01.475990 containerd[1503]: time="2025-01-30T18:29:01.475222150Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 18:29:01.475990 containerd[1503]: time="2025-01-30T18:29:01.475282934Z" level=info msg="Connect containerd service" Jan 30 18:29:01.475990 containerd[1503]: time="2025-01-30T18:29:01.475316844Z" level=info msg="using legacy CRI server" Jan 30 18:29:01.475990 containerd[1503]: time="2025-01-30T18:29:01.475327760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 18:29:01.475990 containerd[1503]: time="2025-01-30T18:29:01.475455181Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 18:29:01.478368 containerd[1503]: time="2025-01-30T18:29:01.478334089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480050217Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480103623Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480166581Z" level=info msg="Start subscribing containerd event" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480232526Z" level=info msg="Start recovering state" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480309623Z" level=info msg="Start event monitor" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480331558Z" level=info msg="Start snapshots syncer" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480342202Z" level=info msg="Start cni network conf syncer for default" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480353772Z" level=info msg="Start streaming server" Jan 30 18:29:01.480792 containerd[1503]: time="2025-01-30T18:29:01.480420550Z" level=info msg="containerd successfully booted in 0.080478s" Jan 30 18:29:01.482123 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 18:29:01.482816 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 18:29:01.499503 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 18:29:01.507466 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 18:29:01.509465 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 18:29:01.510295 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 18:29:01.576095 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 18:29:01.584998 systemd[1]: Started sshd@0-10.244.90.134:22-139.178.89.65:52266.service - OpenSSH per-connection server daemon (139.178.89.65:52266). Jan 30 18:29:01.672343 tar[1492]: linux-amd64/LICENSE Jan 30 18:29:01.672343 tar[1492]: linux-amd64/README.md Jan 30 18:29:01.685885 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
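[Editor's note] The one error in containerd's startup above is benign at this stage: the CRI plugin found no CNI config in /etc/cni/net.d, so pod networking stays unconfigured until something (usually the cluster's CNI add-on, installed later) drops a file there. For reference, a minimal bridge network of the kind that would satisfy the syncer looks like this; the name and subnet are illustrative, and the bridge/host-local/portmap binaries must exist under /opt/cni/bin, the NetworkPluginBinDir from the config dump above:

    cat > /etc/cni/net.d/10-containerd-net.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[ { "subnet": "10.88.0.0/16" } ]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF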
Jan 30 18:29:02.130289 systemd-networkd[1424]: eth0: Gained IPv6LL Jan 30 18:29:02.136548 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 18:29:02.139721 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 18:29:02.151954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:02.154026 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 18:29:02.183764 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 18:29:02.479271 sshd[1580]: Accepted publickey for core from 139.178.89.65 port 52266 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:02.483499 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:02.500576 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 18:29:02.500577 systemd-logind[1486]: New session 1 of user core. Jan 30 18:29:02.512122 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 18:29:02.527147 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 18:29:02.534598 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 18:29:02.538434 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 18:29:02.636764 systemd[1600]: Queued start job for default target default.target. Jan 30 18:29:02.643822 systemd[1600]: Created slice app.slice - User Application Slice. Jan 30 18:29:02.644168 systemd[1600]: Reached target paths.target - Paths. Jan 30 18:29:02.644217 systemd[1600]: Reached target timers.target - Timers. Jan 30 18:29:02.655966 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 18:29:02.664463 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 18:29:02.665083 systemd[1600]: Reached target sockets.target - Sockets. Jan 30 18:29:02.665105 systemd[1600]: Reached target basic.target - Basic System. Jan 30 18:29:02.665146 systemd[1600]: Reached target default.target - Main User Target. Jan 30 18:29:02.665193 systemd[1600]: Startup finished in 119ms. Jan 30 18:29:02.666152 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 18:29:02.672904 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 18:29:02.934255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:02.938868 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:29:03.303783 systemd[1]: Started sshd@1-10.244.90.134:22-139.178.89.65:37474.service - OpenSSH per-connection server daemon (139.178.89.65:37474). Jan 30 18:29:03.539495 kubelet[1614]: E0130 18:29:03.539070 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:29:03.544001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:29:03.544189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:29:03.544722 systemd[1]: kubelet.service: Consumed 1.082s CPU time. 
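[Editor's note] The kubelet exit above is the expected chicken-and-egg on a fresh node: /var/lib/kubelet/config.yaml is normally written by 'kubeadm init' or 'kubeadm join', so until the node joins a cluster the unit fails and systemd keeps rescheduling it (the restart counter climbs through 1, 2, 3 below). The file it wants is a KubeletConfiguration object; a minimal hand-written example, illustrative only and not sufficient by itself for a working node:

    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF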
Jan 30 18:29:04.188614 sshd[1621]: Accepted publickey for core from 139.178.89.65 port 37474 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:04.192389 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:04.203131 systemd-logind[1486]: New session 2 of user core. Jan 30 18:29:04.216095 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 18:29:04.338424 systemd-networkd[1424]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:16a1:24:19ff:fef4:5a86/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:16a1:24:19ff:fef4:5a86/64 assigned by NDisc. Jan 30 18:29:04.338446 systemd-networkd[1424]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 18:29:04.811032 sshd[1621]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:04.818441 systemd[1]: sshd@1-10.244.90.134:22-139.178.89.65:37474.service: Deactivated successfully. Jan 30 18:29:04.821725 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 18:29:04.823868 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Jan 30 18:29:04.825199 systemd-logind[1486]: Removed session 2. Jan 30 18:29:04.974084 systemd[1]: Started sshd@2-10.244.90.134:22-139.178.89.65:37480.service - OpenSSH per-connection server daemon (139.178.89.65:37480). Jan 30 18:29:05.872621 sshd[1634]: Accepted publickey for core from 139.178.89.65 port 37480 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:05.876229 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:05.887455 systemd-logind[1486]: New session 3 of user core. Jan 30 18:29:05.899167 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 18:29:06.497289 sshd[1634]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:06.506373 systemd[1]: sshd@2-10.244.90.134:22-139.178.89.65:37480.service: Deactivated successfully. Jan 30 18:29:06.509870 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 18:29:06.511668 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Jan 30 18:29:06.513745 systemd-logind[1486]: Removed session 3. Jan 30 18:29:06.559628 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 18:29:06.560727 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 18:29:06.567040 systemd-logind[1486]: New session 4 of user core. Jan 30 18:29:06.576021 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 18:29:06.579200 systemd-logind[1486]: New session 5 of user core. Jan 30 18:29:06.580147 systemd[1]: Started session-5.scope - Session 5 of User core. 
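[Editor's note] The networkd notice above is a cosmetic address clash: the DHCPv6 server handed out the same 2a02:...:5a86 address that SLAAC had already derived from the advertised prefix, so the /128 is ignored. Following the log's own hint, one option is to stop forming SLAAC addresses from autonomous prefixes (the other being an IPv6Token= setting); a sketch, assuming the stock zz-default.network is still the matching unit:

    mkdir -p /etc/systemd/network/zz-default.network.d
    cat > /etc/systemd/network/zz-default.network.d/10-ipv6.conf <<'EOF'
    [IPv6AcceptRA]
    UseAutonomousPrefix=no
    EOF
    networkctl reload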
Jan 30 18:29:07.921896 coreos-metadata[1476]: Jan 30 18:29:07.921 WARN failed to locate config-drive, using the metadata service API instead Jan 30 18:29:07.947228 coreos-metadata[1476]: Jan 30 18:29:07.947 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 18:29:07.953834 coreos-metadata[1476]: Jan 30 18:29:07.953 INFO Fetch failed with 404: resource not found Jan 30 18:29:07.953834 coreos-metadata[1476]: Jan 30 18:29:07.953 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 18:29:07.954400 coreos-metadata[1476]: Jan 30 18:29:07.954 INFO Fetch successful Jan 30 18:29:07.954744 coreos-metadata[1476]: Jan 30 18:29:07.954 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 18:29:07.970765 coreos-metadata[1476]: Jan 30 18:29:07.970 INFO Fetch successful Jan 30 18:29:07.970765 coreos-metadata[1476]: Jan 30 18:29:07.970 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 18:29:07.987598 coreos-metadata[1476]: Jan 30 18:29:07.987 INFO Fetch successful Jan 30 18:29:07.987598 coreos-metadata[1476]: Jan 30 18:29:07.987 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 18:29:08.002643 coreos-metadata[1476]: Jan 30 18:29:08.002 INFO Fetch successful Jan 30 18:29:08.002643 coreos-metadata[1476]: Jan 30 18:29:08.002 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 18:29:08.020167 coreos-metadata[1476]: Jan 30 18:29:08.020 INFO Fetch successful Jan 30 18:29:08.070839 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 18:29:08.072723 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 18:29:08.310254 coreos-metadata[1546]: Jan 30 18:29:08.310 WARN failed to locate config-drive, using the metadata service API instead Jan 30 18:29:08.333768 coreos-metadata[1546]: Jan 30 18:29:08.333 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 18:29:08.358845 coreos-metadata[1546]: Jan 30 18:29:08.358 INFO Fetch successful Jan 30 18:29:08.359181 coreos-metadata[1546]: Jan 30 18:29:08.359 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 18:29:08.387784 coreos-metadata[1546]: Jan 30 18:29:08.387 INFO Fetch successful Jan 30 18:29:08.390272 unknown[1546]: wrote ssh authorized keys file for user: core Jan 30 18:29:08.419011 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Jan 30 18:29:08.419835 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 18:29:08.422223 systemd[1]: Finished sshkeys.service. Jan 30 18:29:08.425943 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 18:29:08.426122 systemd[1]: Startup finished in 979ms (kernel) + 13.243s (initrd) + 10.531s (userspace) = 24.754s. Jan 30 18:29:13.750847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 18:29:13.770177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:13.897984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
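[Editor's note] coreos-metadata first probes for a config-drive, then falls back to the HTTP metadata service, tolerating the 404 on the OpenStack-specific meta_data.json path before succeeding on the EC2-compatible endpoints. The same URLs can be fetched by hand for debugging:

    curl -sf http://169.254.169.254/latest/meta-data/hostname
    curl -sf http://169.254.169.254/latest/meta-data/instance-id
    curl -sf http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key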
Jan 30 18:29:13.915383 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:29:13.969843 kubelet[1687]: E0130 18:29:13.969761 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:29:13.973085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:29:13.973225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:29:16.647451 systemd[1]: Started sshd@3-10.244.90.134:22-139.178.89.65:40594.service - OpenSSH per-connection server daemon (139.178.89.65:40594). Jan 30 18:29:17.549944 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 40594 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:17.554209 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:17.566621 systemd-logind[1486]: New session 6 of user core. Jan 30 18:29:17.573961 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 18:29:18.179458 sshd[1696]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:18.189543 systemd[1]: sshd@3-10.244.90.134:22-139.178.89.65:40594.service: Deactivated successfully. Jan 30 18:29:18.193089 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 18:29:18.193987 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Jan 30 18:29:18.195436 systemd-logind[1486]: Removed session 6. Jan 30 18:29:18.342122 systemd[1]: Started sshd@4-10.244.90.134:22-139.178.89.65:40600.service - OpenSSH per-connection server daemon (139.178.89.65:40600). Jan 30 18:29:19.226804 sshd[1703]: Accepted publickey for core from 139.178.89.65 port 40600 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:19.230550 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:19.241457 systemd-logind[1486]: New session 7 of user core. Jan 30 18:29:19.250964 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 18:29:19.839893 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:19.847928 systemd[1]: sshd@4-10.244.90.134:22-139.178.89.65:40600.service: Deactivated successfully. Jan 30 18:29:19.850335 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 18:29:19.851224 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Jan 30 18:29:19.852506 systemd-logind[1486]: Removed session 7. Jan 30 18:29:20.003068 systemd[1]: Started sshd@5-10.244.90.134:22-139.178.89.65:40614.service - OpenSSH per-connection server daemon (139.178.89.65:40614). Jan 30 18:29:20.907047 sshd[1710]: Accepted publickey for core from 139.178.89.65 port 40614 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:20.910839 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:20.920637 systemd-logind[1486]: New session 8 of user core. Jan 30 18:29:20.927242 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 18:29:21.532302 sshd[1710]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:21.539219 systemd[1]: sshd@5-10.244.90.134:22-139.178.89.65:40614.service: Deactivated successfully. Jan 30 18:29:21.542559 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 18:29:21.545924 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Jan 30 18:29:21.547736 systemd-logind[1486]: Removed session 8. Jan 30 18:29:21.696993 systemd[1]: Started sshd@6-10.244.90.134:22-139.178.89.65:38234.service - OpenSSH per-connection server daemon (139.178.89.65:38234). Jan 30 18:29:22.585468 sshd[1717]: Accepted publickey for core from 139.178.89.65 port 38234 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:22.589219 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:22.598051 systemd-logind[1486]: New session 9 of user core. Jan 30 18:29:22.604897 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 18:29:23.074777 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 18:29:23.075079 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:29:23.090627 sudo[1720]: pam_unix(sudo:session): session closed for user root Jan 30 18:29:23.234841 sshd[1717]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:23.241841 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Jan 30 18:29:23.243786 systemd[1]: sshd@6-10.244.90.134:22-139.178.89.65:38234.service: Deactivated successfully. Jan 30 18:29:23.245973 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 18:29:23.248096 systemd-logind[1486]: Removed session 9. Jan 30 18:29:23.390995 systemd[1]: Started sshd@7-10.244.90.134:22-139.178.89.65:38240.service - OpenSSH per-connection server daemon (139.178.89.65:38240). Jan 30 18:29:24.000711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 18:29:24.011111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:24.127520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:24.133777 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:29:24.174658 kubelet[1735]: E0130 18:29:24.174602 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:29:24.177955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:29:24.178368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:29:24.302426 sshd[1725]: Accepted publickey for core from 139.178.89.65 port 38240 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:24.306242 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:24.316290 systemd-logind[1486]: New session 10 of user core. Jan 30 18:29:24.322857 systemd[1]: Started session-10.scope - Session 10 of User core. 
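[Editor's note] The first privileged command in the session history above (sudo setenforce 1) flips SELinux to enforcing at runtime. setenforce only changes the live state, not the persistent configuration, so it does not survive a reboot:

    sudo setenforce 1    # enforce immediately (runtime only)
    getenforce           # prints Enforcing / Permissive / Disabled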
Jan 30 18:29:24.786183 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 18:29:24.787432 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:29:24.794051 sudo[1744]: pam_unix(sudo:session): session closed for user root Jan 30 18:29:24.803218 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 18:29:24.803530 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:29:24.824957 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 18:29:24.827523 auditctl[1747]: No rules Jan 30 18:29:24.827975 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 18:29:24.828198 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 18:29:24.830931 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 18:29:24.862018 augenrules[1765]: No rules Jan 30 18:29:24.862734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 18:29:24.864450 sudo[1743]: pam_unix(sudo:session): session closed for user root Jan 30 18:29:25.009534 sshd[1725]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:25.019841 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Jan 30 18:29:25.021365 systemd[1]: sshd@7-10.244.90.134:22-139.178.89.65:38240.service: Deactivated successfully. Jan 30 18:29:25.024180 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 18:29:25.025337 systemd-logind[1486]: Removed session 10. Jan 30 18:29:25.172200 systemd[1]: Started sshd@8-10.244.90.134:22-139.178.89.65:38246.service - OpenSSH per-connection server daemon (139.178.89.65:38246). Jan 30 18:29:26.078363 sshd[1773]: Accepted publickey for core from 139.178.89.65 port 38246 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:29:26.081810 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:29:26.091352 systemd-logind[1486]: New session 11 of user core. Jan 30 18:29:26.103986 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 18:29:26.561995 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 18:29:26.562746 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:29:26.961903 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 18:29:26.975032 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 18:29:27.341836 dockerd[1792]: time="2025-01-30T18:29:27.341756965Z" level=info msg="Starting up" Jan 30 18:29:27.467647 dockerd[1792]: time="2025-01-30T18:29:27.467544537Z" level=info msg="Loading containers: start." Jan 30 18:29:27.575804 kernel: Initializing XFRM netlink socket Jan 30 18:29:27.651276 systemd-networkd[1424]: docker0: Link UP Jan 30 18:29:27.672436 dockerd[1792]: time="2025-01-30T18:29:27.672026882Z" level=info msg="Loading containers: done." 
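[Editor's note] The audit-rules sequence above (delete the shipped rule files, then restart the service) ends with both auditctl and augenrules reporting "No rules". Roughly the same reload by hand would be:

    auditctl -D          # flush the currently loaded kernel audit rules
    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list active rules; here it would print "No rules"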
Jan 30 18:29:27.696805 dockerd[1792]: time="2025-01-30T18:29:27.696237002Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 18:29:27.696805 dockerd[1792]: time="2025-01-30T18:29:27.696372053Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 18:29:27.696805 dockerd[1792]: time="2025-01-30T18:29:27.696505949Z" level=info msg="Daemon has completed initialization" Jan 30 18:29:27.698568 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck972155943-merged.mount: Deactivated successfully. Jan 30 18:29:27.723148 dockerd[1792]: time="2025-01-30T18:29:27.723090870Z" level=info msg="API listen on /run/docker.sock" Jan 30 18:29:27.723267 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 18:29:29.132721 containerd[1503]: time="2025-01-30T18:29:29.132107180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 18:29:30.076249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719788143.mount: Deactivated successfully. Jan 30 18:29:31.424716 containerd[1503]: time="2025-01-30T18:29:31.424375978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:31.426029 containerd[1503]: time="2025-01-30T18:29:31.425983703Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 30 18:29:31.426875 containerd[1503]: time="2025-01-30T18:29:31.426801428Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:31.429139 containerd[1503]: time="2025-01-30T18:29:31.429093105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:31.430852 containerd[1503]: time="2025-01-30T18:29:31.430207503Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.298007779s" Jan 30 18:29:31.430852 containerd[1503]: time="2025-01-30T18:29:31.430258034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 18:29:31.432130 containerd[1503]: time="2025-01-30T18:29:31.432052063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 18:29:33.170653 containerd[1503]: time="2025-01-30T18:29:33.170589112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:33.171553 containerd[1503]: time="2025-01-30T18:29:33.171514146Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 30 18:29:33.172233 containerd[1503]: 
time="2025-01-30T18:29:33.171862197Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:33.174532 containerd[1503]: time="2025-01-30T18:29:33.174502762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:33.175708 containerd[1503]: time="2025-01-30T18:29:33.175660096Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.743572002s" Jan 30 18:29:33.175818 containerd[1503]: time="2025-01-30T18:29:33.175802185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 18:29:33.176797 containerd[1503]: time="2025-01-30T18:29:33.176776664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 18:29:34.250574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 18:29:34.258060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:34.384878 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 18:29:34.408912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:34.411309 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:29:34.462723 kubelet[2010]: E0130 18:29:34.462398 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:29:34.465214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:29:34.465375 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 18:29:34.673929 containerd[1503]: time="2025-01-30T18:29:34.672996698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:34.673929 containerd[1503]: time="2025-01-30T18:29:34.673809058Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 30 18:29:34.678006 containerd[1503]: time="2025-01-30T18:29:34.677820941Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:34.681117 containerd[1503]: time="2025-01-30T18:29:34.681054256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:34.682891 containerd[1503]: time="2025-01-30T18:29:34.682486971Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.505586451s" Jan 30 18:29:34.682891 containerd[1503]: time="2025-01-30T18:29:34.682554277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 18:29:34.684269 containerd[1503]: time="2025-01-30T18:29:34.684213946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 18:29:36.275173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974139447.mount: Deactivated successfully. 
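Each PullImage/ImageCreate pair in this stretch is the kubelet driving containerd's client API in the k8s.io namespace. A hedged sketch of the same call path using the github.com/containerd/containerd Go client, with the socket location assumed to match this host:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        // Kubernetes-managed images live in the "k8s.io" namespace, which is
        // what the io.cri-containerd.image labels above belong to.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        size, _ := img.Size(ctx)
        fmt.Println("pulled", img.Name(), "-", size, "bytes")
    }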
Jan 30 18:29:36.811077 containerd[1503]: time="2025-01-30T18:29:36.809616317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:36.813057 containerd[1503]: time="2025-01-30T18:29:36.812960866Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 30 18:29:36.815722 containerd[1503]: time="2025-01-30T18:29:36.814811527Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:36.819614 containerd[1503]: time="2025-01-30T18:29:36.819565212Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.135164413s" Jan 30 18:29:36.819839 containerd[1503]: time="2025-01-30T18:29:36.819808148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 18:29:36.820105 containerd[1503]: time="2025-01-30T18:29:36.820069308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:36.820960 containerd[1503]: time="2025-01-30T18:29:36.820925105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 18:29:37.499820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064847242.mount: Deactivated successfully. 
Jan 30 18:29:38.434187 containerd[1503]: time="2025-01-30T18:29:38.433060870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:38.434187 containerd[1503]: time="2025-01-30T18:29:38.433575044Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 18:29:38.434187 containerd[1503]: time="2025-01-30T18:29:38.434138691Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:38.436970 containerd[1503]: time="2025-01-30T18:29:38.436941757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:38.438119 containerd[1503]: time="2025-01-30T18:29:38.438094127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.616994576s" Jan 30 18:29:38.438215 containerd[1503]: time="2025-01-30T18:29:38.438201620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 18:29:38.438756 containerd[1503]: time="2025-01-30T18:29:38.438626888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 18:29:39.068085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184728750.mount: Deactivated successfully. 
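The ImageCreate events above land in containerd's image store, which can be enumerated directly. A companion sketch to the pull example, under the same client and namespace assumptions:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        // ImageService exposes the store the ImageCreate events are written to.
        imgs, err := client.ImageService().List(ctx)
        if err != nil {
            panic(err)
        }
        for _, img := range imgs {
            fmt.Println(img.Name, img.Target.Digest)
        }
    }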
Jan 30 18:29:39.071862 containerd[1503]: time="2025-01-30T18:29:39.071823651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:39.072970 containerd[1503]: time="2025-01-30T18:29:39.072931424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 30 18:29:39.073620 containerd[1503]: time="2025-01-30T18:29:39.073591000Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:39.075438 containerd[1503]: time="2025-01-30T18:29:39.075395573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:39.076710 containerd[1503]: time="2025-01-30T18:29:39.076234742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 637.369911ms" Jan 30 18:29:39.076710 containerd[1503]: time="2025-01-30T18:29:39.076266971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 18:29:39.077293 containerd[1503]: time="2025-01-30T18:29:39.077156139Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 18:29:39.701767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532222558.mount: Deactivated successfully. Jan 30 18:29:41.522768 containerd[1503]: time="2025-01-30T18:29:41.522707507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:41.523856 containerd[1503]: time="2025-01-30T18:29:41.523814384Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 30 18:29:41.524470 containerd[1503]: time="2025-01-30T18:29:41.524193449Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:41.527008 containerd[1503]: time="2025-01-30T18:29:41.526960460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:29:41.528479 containerd[1503]: time="2025-01-30T18:29:41.528296622Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.451115036s" Jan 30 18:29:41.528479 containerd[1503]: time="2025-01-30T18:29:41.528339710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 18:29:44.344438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
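The pull entries carry enough data to sanity-check registry throughput: the etcd figures logged just above (56,909,194 bytes in roughly 2.451s) work out to about 22 MiB/s. The arithmetic:

    package main

    import "fmt"

    func main() {
        // Figures from the "Pulled image ... etcd:3.5.15-0" entry above.
        const bytes = 56909194.0 // reported image size
        const seconds = 2.451    // reported duration, 2.451115036s rounded
        mib := bytes / (1 << 20)
        fmt.Printf("%.1f MiB in %.3fs -> %.1f MiB/s\n", mib, seconds, mib/seconds)
    }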
Jan 30 18:29:44.350920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:44.383617 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-11.scope)... Jan 30 18:29:44.383643 systemd[1]: Reloading... Jan 30 18:29:44.520769 zram_generator::config[2188]: No configuration found. Jan 30 18:29:44.660444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 18:29:44.735533 systemd[1]: Reloading finished in 351 ms. Jan 30 18:29:44.793488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:44.795521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:44.799321 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 18:29:44.799690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:44.809080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:44.918817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:44.932987 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 18:29:44.997397 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 18:29:44.997397 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 18:29:44.997397 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
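All three deprecation warnings point at the same migration: those flags belong in the KubeletConfiguration file the unit passes via --config. A sketch that inspects that file, assuming the standard path and the gopkg.in/yaml.v3 module; containerRuntimeEndpoint is a kubelet.config.k8s.io/v1beta1 field and may simply be absent if the flag has not been migrated yet:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/kubelet/config.yaml")
        if err != nil {
            fmt.Println("cannot read kubelet config:", err)
            return
        }
        var cfg map[string]any
        if err := yaml.Unmarshal(raw, &cfg); err != nil {
            fmt.Println("cannot parse kubelet config:", err)
            return
        }
        // Fields like containerRuntimeEndpoint live here rather than on the
        // command line once the deprecated flags are migrated.
        fmt.Println("kind:", cfg["kind"], "apiVersion:", cfg["apiVersion"])
        fmt.Println("containerRuntimeEndpoint:", cfg["containerRuntimeEndpoint"])
    }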
Jan 30 18:29:44.997801 kubelet[2261]: I0130 18:29:44.997466 2261 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 18:29:45.384511 kubelet[2261]: I0130 18:29:45.384111 2261 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 18:29:45.384511 kubelet[2261]: I0130 18:29:45.384167 2261 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 18:29:45.385995 kubelet[2261]: I0130 18:29:45.385960 2261 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 18:29:45.414904 kubelet[2261]: I0130 18:29:45.414860 2261 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 18:29:45.424112 kubelet[2261]: E0130 18:29:45.424042 2261 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.90.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:45.433726 kubelet[2261]: E0130 18:29:45.433496 2261 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 18:29:45.433726 kubelet[2261]: I0130 18:29:45.433558 2261 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 18:29:45.440863 kubelet[2261]: I0130 18:29:45.440819 2261 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 18:29:45.442243 kubelet[2261]: I0130 18:29:45.442208 2261 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 18:29:45.442524 kubelet[2261]: I0130 18:29:45.442474 2261 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 18:29:45.442809 kubelet[2261]: I0130 18:29:45.442522 2261 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-eex0h.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 18:29:45.442973 kubelet[2261]: I0130 18:29:45.442833 2261 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 18:29:45.442973 kubelet[2261]: I0130 18:29:45.442847 2261 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 18:29:45.443062 kubelet[2261]: I0130 18:29:45.443043 2261 state_mem.go:36] "Initialized new in-memory state store" Jan 30 18:29:45.445571 kubelet[2261]: I0130 18:29:45.445250 2261 kubelet.go:408] "Attempting to sync node with API server" Jan 30 18:29:45.445571 kubelet[2261]: I0130 18:29:45.445282 2261 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 18:29:45.445571 kubelet[2261]: I0130 18:29:45.445344 2261 kubelet.go:314] "Adding apiserver pod source" Jan 30 18:29:45.445571 kubelet[2261]: I0130 18:29:45.445377 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 18:29:45.450041 kubelet[2261]: W0130 18:29:45.449592 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.90.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eex0h.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:45.450041 kubelet[2261]: E0130 18:29:45.449738 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.244.90.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eex0h.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:45.453050 kubelet[2261]: W0130 18:29:45.452519 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.90.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:45.453050 kubelet[2261]: E0130 18:29:45.452611 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.90.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:45.453050 kubelet[2261]: I0130 18:29:45.452829 2261 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 18:29:45.459151 kubelet[2261]: I0130 18:29:45.459130 2261 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 18:29:45.459903 kubelet[2261]: W0130 18:29:45.459880 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 18:29:45.462520 kubelet[2261]: I0130 18:29:45.462502 2261 server.go:1269] "Started kubelet" Jan 30 18:29:45.466209 kubelet[2261]: I0130 18:29:45.466134 2261 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 18:29:45.472923 kubelet[2261]: I0130 18:29:45.472238 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 18:29:45.472923 kubelet[2261]: I0130 18:29:45.472776 2261 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 18:29:45.474288 kubelet[2261]: I0130 18:29:45.473808 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 18:29:45.477118 kubelet[2261]: I0130 18:29:45.477096 2261 server.go:460] "Adding debug handlers to kubelet server" Jan 30 18:29:45.481270 kubelet[2261]: E0130 18:29:45.478261 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.90.134:6443/api/v1/namespaces/default/events\": dial tcp 10.244.90.134:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-eex0h.gb1.brightbox.com.181f8be259207d4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-eex0h.gb1.brightbox.com,UID:srv-eex0h.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-eex0h.gb1.brightbox.com,},FirstTimestamp:2025-01-30 18:29:45.462472015 +0000 UTC m=+0.523520211,LastTimestamp:2025-01-30 18:29:45.462472015 +0000 UTC m=+0.523520211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-eex0h.gb1.brightbox.com,}" Jan 30 18:29:45.481982 kubelet[2261]: I0130 18:29:45.481961 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 18:29:45.486095 kubelet[2261]: I0130 
18:29:45.486064 2261 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 18:29:45.486862 kubelet[2261]: E0130 18:29:45.486332 2261 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-eex0h.gb1.brightbox.com\" not found" Jan 30 18:29:45.487230 kubelet[2261]: I0130 18:29:45.487209 2261 factory.go:221] Registration of the systemd container factory successfully Jan 30 18:29:45.487559 kubelet[2261]: I0130 18:29:45.487537 2261 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 18:29:45.487906 kubelet[2261]: E0130 18:29:45.487871 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eex0h.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.134:6443: connect: connection refused" interval="200ms" Jan 30 18:29:45.488464 kubelet[2261]: I0130 18:29:45.488448 2261 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 18:29:45.488998 kubelet[2261]: W0130 18:29:45.488961 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.90.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:45.489963 kubelet[2261]: E0130 18:29:45.489405 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.90.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:45.489963 kubelet[2261]: I0130 18:29:45.489922 2261 reconciler.go:26] "Reconciler: start to sync state" Jan 30 18:29:45.490847 kubelet[2261]: E0130 18:29:45.490827 2261 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 18:29:45.491138 kubelet[2261]: I0130 18:29:45.491120 2261 factory.go:221] Registration of the containerd container factory successfully Jan 30 18:29:45.502306 kubelet[2261]: I0130 18:29:45.502207 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 18:29:45.503727 kubelet[2261]: I0130 18:29:45.503707 2261 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 18:29:45.504121 kubelet[2261]: I0130 18:29:45.503811 2261 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 18:29:45.504121 kubelet[2261]: I0130 18:29:45.503838 2261 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 18:29:45.504121 kubelet[2261]: E0130 18:29:45.503882 2261 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 18:29:45.512819 kubelet[2261]: W0130 18:29:45.512758 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.90.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:45.512990 kubelet[2261]: E0130 18:29:45.512923 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.90.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:45.535800 kubelet[2261]: I0130 18:29:45.535701 2261 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 18:29:45.535800 kubelet[2261]: I0130 18:29:45.535726 2261 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 18:29:45.535800 kubelet[2261]: I0130 18:29:45.535744 2261 state_mem.go:36] "Initialized new in-memory state store" Jan 30 18:29:45.537881 kubelet[2261]: I0130 18:29:45.537812 2261 policy_none.go:49] "None policy: Start" Jan 30 18:29:45.539461 kubelet[2261]: I0130 18:29:45.539420 2261 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 18:29:45.539613 kubelet[2261]: I0130 18:29:45.539488 2261 state_mem.go:35] "Initializing new in-memory state store" Jan 30 18:29:45.549419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 18:29:45.567766 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 18:29:45.572289 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 18:29:45.580786 kubelet[2261]: I0130 18:29:45.580760 2261 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 18:29:45.581176 kubelet[2261]: I0130 18:29:45.581159 2261 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 18:29:45.581239 kubelet[2261]: I0130 18:29:45.581181 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 18:29:45.582136 kubelet[2261]: I0130 18:29:45.581484 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 18:29:45.583514 kubelet[2261]: E0130 18:29:45.583500 2261 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-eex0h.gb1.brightbox.com\" not found" Jan 30 18:29:45.629930 systemd[1]: Created slice kubepods-burstable-podbb52533bd05b4c3474f3fdfa9376b3fb.slice - libcontainer container kubepods-burstable-podbb52533bd05b4c3474f3fdfa9376b3fb.slice. Jan 30 18:29:45.656830 systemd[1]: Created slice kubepods-burstable-pod76424f028d3773aa1075d572f7bf0f33.slice - libcontainer container kubepods-burstable-pod76424f028d3773aa1075d572f7bf0f33.slice. 
Jan 30 18:29:45.672617 update_engine[1487]: I20250130 18:29:45.672526 1487 update_attempter.cc:509] Updating boot flags... Jan 30 18:29:45.676332 systemd[1]: Created slice kubepods-burstable-podc88d0f9cb3e91e83c7036ddb7438748e.slice - libcontainer container kubepods-burstable-podc88d0f9cb3e91e83c7036ddb7438748e.slice. Jan 30 18:29:45.686764 kubelet[2261]: I0130 18:29:45.686738 2261 kubelet_node_status.go:72] "Attempting to register node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.687135 kubelet[2261]: E0130 18:29:45.687097 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.90.134:6443/api/v1/nodes\": dial tcp 10.244.90.134:6443: connect: connection refused" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.688452 kubelet[2261]: E0130 18:29:45.688408 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eex0h.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.134:6443: connect: connection refused" interval="400ms" Jan 30 18:29:45.691115 kubelet[2261]: I0130 18:29:45.690911 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691115 kubelet[2261]: I0130 18:29:45.690944 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb52533bd05b4c3474f3fdfa9376b3fb-ca-certs\") pod \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" (UID: \"bb52533bd05b4c3474f3fdfa9376b3fb\") " pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691115 kubelet[2261]: I0130 18:29:45.690961 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb52533bd05b4c3474f3fdfa9376b3fb-k8s-certs\") pod \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" (UID: \"bb52533bd05b4c3474f3fdfa9376b3fb\") " pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691115 kubelet[2261]: I0130 18:29:45.690979 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-flexvolume-dir\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691115 kubelet[2261]: I0130 18:29:45.690996 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-k8s-certs\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691339 kubelet[2261]: I0130 18:29:45.691016 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-kubeconfig\") pod 
\"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691339 kubelet[2261]: I0130 18:29:45.691034 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb52533bd05b4c3474f3fdfa9376b3fb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" (UID: \"bb52533bd05b4c3474f3fdfa9376b3fb\") " pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691339 kubelet[2261]: I0130 18:29:45.691062 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-ca-certs\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.691339 kubelet[2261]: I0130 18:29:45.691083 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c88d0f9cb3e91e83c7036ddb7438748e-kubeconfig\") pod \"kube-scheduler-srv-eex0h.gb1.brightbox.com\" (UID: \"c88d0f9cb3e91e83c7036ddb7438748e\") " pod="kube-system/kube-scheduler-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.708024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2299) Jan 30 18:29:45.758724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2298) Jan 30 18:29:45.894617 kubelet[2261]: I0130 18:29:45.894532 2261 kubelet_node_status.go:72] "Attempting to register node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.895449 kubelet[2261]: E0130 18:29:45.895395 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.90.134:6443/api/v1/nodes\": dial tcp 10.244.90.134:6443: connect: connection refused" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:45.955136 containerd[1503]: time="2025-01-30T18:29:45.954939006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-eex0h.gb1.brightbox.com,Uid:bb52533bd05b4c3474f3fdfa9376b3fb,Namespace:kube-system,Attempt:0,}" Jan 30 18:29:45.964486 containerd[1503]: time="2025-01-30T18:29:45.964156626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-eex0h.gb1.brightbox.com,Uid:76424f028d3773aa1075d572f7bf0f33,Namespace:kube-system,Attempt:0,}" Jan 30 18:29:45.982450 containerd[1503]: time="2025-01-30T18:29:45.982342609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-eex0h.gb1.brightbox.com,Uid:c88d0f9cb3e91e83c7036ddb7438748e,Namespace:kube-system,Attempt:0,}" Jan 30 18:29:46.089362 kubelet[2261]: E0130 18:29:46.089265 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eex0h.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.134:6443: connect: connection refused" interval="800ms" Jan 30 18:29:46.284444 kubelet[2261]: W0130 18:29:46.284303 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.90.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:46.284862 kubelet[2261]: E0130 18:29:46.284472 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.90.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:46.300745 kubelet[2261]: I0130 18:29:46.300652 2261 kubelet_node_status.go:72] "Attempting to register node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:46.301526 kubelet[2261]: E0130 18:29:46.301468 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.90.134:6443/api/v1/nodes\": dial tcp 10.244.90.134:6443: connect: connection refused" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:46.432624 kubelet[2261]: W0130 18:29:46.432445 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.90.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:46.432624 kubelet[2261]: E0130 18:29:46.432607 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.90.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:46.584109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862607824.mount: Deactivated successfully. Jan 30 18:29:46.588758 containerd[1503]: time="2025-01-30T18:29:46.588608564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:29:46.590091 containerd[1503]: time="2025-01-30T18:29:46.590039992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 18:29:46.590925 containerd[1503]: time="2025-01-30T18:29:46.590844081Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:29:46.593707 containerd[1503]: time="2025-01-30T18:29:46.593587880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:29:46.595568 containerd[1503]: time="2025-01-30T18:29:46.595480995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:29:46.596587 containerd[1503]: time="2025-01-30T18:29:46.596510004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 18:29:46.598164 containerd[1503]: time="2025-01-30T18:29:46.598097538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:29:46.598296 
containerd[1503]: time="2025-01-30T18:29:46.598179496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 18:29:46.599725 containerd[1503]: time="2025-01-30T18:29:46.599300529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.228953ms" Jan 30 18:29:46.612630 containerd[1503]: time="2025-01-30T18:29:46.612580259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.325698ms" Jan 30 18:29:46.617288 containerd[1503]: time="2025-01-30T18:29:46.617244625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.69552ms" Jan 30 18:29:46.784396 containerd[1503]: time="2025-01-30T18:29:46.783767920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:29:46.784396 containerd[1503]: time="2025-01-30T18:29:46.783859964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:29:46.784396 containerd[1503]: time="2025-01-30T18:29:46.783896399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:46.784396 containerd[1503]: time="2025-01-30T18:29:46.784047218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:46.798781 containerd[1503]: time="2025-01-30T18:29:46.797667577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:29:46.799223 containerd[1503]: time="2025-01-30T18:29:46.798450765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:29:46.799514 containerd[1503]: time="2025-01-30T18:29:46.799460413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:46.800984 containerd[1503]: time="2025-01-30T18:29:46.800827280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:46.804324 containerd[1503]: time="2025-01-30T18:29:46.804011928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:29:46.804620 containerd[1503]: time="2025-01-30T18:29:46.804571897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:29:46.804811 containerd[1503]: time="2025-01-30T18:29:46.804600420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:46.805044 containerd[1503]: time="2025-01-30T18:29:46.804974244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:46.828009 systemd[1]: Started cri-containerd-764be8e8b46a69aa0cb5f28af675e51363d19942599d143b9c3350e93f5f3d1a.scope - libcontainer container 764be8e8b46a69aa0cb5f28af675e51363d19942599d143b9c3350e93f5f3d1a. Jan 30 18:29:46.834777 kubelet[2261]: W0130 18:29:46.834535 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.90.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:46.834777 kubelet[2261]: E0130 18:29:46.834636 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.90.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:46.841261 systemd[1]: Started cri-containerd-e2f5675593128dddfc3aef83723957caee83db637269846ad81488cfefc8acbe.scope - libcontainer container e2f5675593128dddfc3aef83723957caee83db637269846ad81488cfefc8acbe. Jan 30 18:29:46.851898 systemd[1]: Started cri-containerd-3eb60b4399ee8232ce8afff0cdca1413ea86cd2f59a13854a84ad3339838aec5.scope - libcontainer container 3eb60b4399ee8232ce8afff0cdca1413ea86cd2f59a13854a84ad3339838aec5. 
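The cri-containerd-* scopes starting here are containerd's CRI side launching the static-pod sandboxes requested above. The same gRPC service the kubelet uses is open to other clients on the node; a sketch of the cheapest call (Version) against the CRI socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimeapi.NewRuntimeServiceClient(conn)
        // RunPodSandbox, CreateContainer and StartContainer in the entries
        // around this point travel over this same RuntimeService.
        v, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(v.RuntimeName, v.RuntimeVersion, "CRI", v.RuntimeApiVersion)
    }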
Jan 30 18:29:46.892702 kubelet[2261]: E0130 18:29:46.892477 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-eex0h.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.134:6443: connect: connection refused" interval="1.6s" Jan 30 18:29:46.911693 containerd[1503]: time="2025-01-30T18:29:46.911634051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-eex0h.gb1.brightbox.com,Uid:bb52533bd05b4c3474f3fdfa9376b3fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"764be8e8b46a69aa0cb5f28af675e51363d19942599d143b9c3350e93f5f3d1a\"" Jan 30 18:29:46.918699 containerd[1503]: time="2025-01-30T18:29:46.918652782Z" level=info msg="CreateContainer within sandbox \"764be8e8b46a69aa0cb5f28af675e51363d19942599d143b9c3350e93f5f3d1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 18:29:46.938587 containerd[1503]: time="2025-01-30T18:29:46.938284609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-eex0h.gb1.brightbox.com,Uid:76424f028d3773aa1075d572f7bf0f33,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb60b4399ee8232ce8afff0cdca1413ea86cd2f59a13854a84ad3339838aec5\"" Jan 30 18:29:46.940642 containerd[1503]: time="2025-01-30T18:29:46.940615312Z" level=info msg="CreateContainer within sandbox \"3eb60b4399ee8232ce8afff0cdca1413ea86cd2f59a13854a84ad3339838aec5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 18:29:46.946448 containerd[1503]: time="2025-01-30T18:29:46.946411134Z" level=info msg="CreateContainer within sandbox \"764be8e8b46a69aa0cb5f28af675e51363d19942599d143b9c3350e93f5f3d1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"acfce6e135f1bfc5039e804abed9c9e83e2a7a70c769a60f1759ee61ef592cc7\"" Jan 30 18:29:46.949504 containerd[1503]: time="2025-01-30T18:29:46.949473589Z" level=info msg="CreateContainer within sandbox \"3eb60b4399ee8232ce8afff0cdca1413ea86cd2f59a13854a84ad3339838aec5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c30366c199de5ed042f216c4dd82728282e478ede2824e972a3c4b9f49902017\"" Jan 30 18:29:46.949895 containerd[1503]: time="2025-01-30T18:29:46.949859746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-eex0h.gb1.brightbox.com,Uid:c88d0f9cb3e91e83c7036ddb7438748e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2f5675593128dddfc3aef83723957caee83db637269846ad81488cfefc8acbe\"" Jan 30 18:29:46.950106 containerd[1503]: time="2025-01-30T18:29:46.950048228Z" level=info msg="StartContainer for \"acfce6e135f1bfc5039e804abed9c9e83e2a7a70c769a60f1759ee61ef592cc7\"" Jan 30 18:29:46.960719 containerd[1503]: time="2025-01-30T18:29:46.960372447Z" level=info msg="StartContainer for \"c30366c199de5ed042f216c4dd82728282e478ede2824e972a3c4b9f49902017\"" Jan 30 18:29:46.966311 containerd[1503]: time="2025-01-30T18:29:46.966281878Z" level=info msg="CreateContainer within sandbox \"e2f5675593128dddfc3aef83723957caee83db637269846ad81488cfefc8acbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 18:29:46.985538 containerd[1503]: time="2025-01-30T18:29:46.985501067Z" level=info msg="CreateContainer within sandbox \"e2f5675593128dddfc3aef83723957caee83db637269846ad81488cfefc8acbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"e62fcdd36a59f4d177a4176c99a41996b6d0b6137681b10862c0d165316b43e9\"" Jan 30 18:29:46.985836 systemd[1]: Started cri-containerd-acfce6e135f1bfc5039e804abed9c9e83e2a7a70c769a60f1759ee61ef592cc7.scope - libcontainer container acfce6e135f1bfc5039e804abed9c9e83e2a7a70c769a60f1759ee61ef592cc7. Jan 30 18:29:46.987518 containerd[1503]: time="2025-01-30T18:29:46.986845234Z" level=info msg="StartContainer for \"e62fcdd36a59f4d177a4176c99a41996b6d0b6137681b10862c0d165316b43e9\"" Jan 30 18:29:47.000295 systemd[1]: Started cri-containerd-c30366c199de5ed042f216c4dd82728282e478ede2824e972a3c4b9f49902017.scope - libcontainer container c30366c199de5ed042f216c4dd82728282e478ede2824e972a3c4b9f49902017. Jan 30 18:29:47.043823 systemd[1]: Started cri-containerd-e62fcdd36a59f4d177a4176c99a41996b6d0b6137681b10862c0d165316b43e9.scope - libcontainer container e62fcdd36a59f4d177a4176c99a41996b6d0b6137681b10862c0d165316b43e9. Jan 30 18:29:47.049718 kubelet[2261]: W0130 18:29:47.049099 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.90.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eex0h.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.90.134:6443: connect: connection refused Jan 30 18:29:47.049718 kubelet[2261]: E0130 18:29:47.049183 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.90.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-eex0h.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.90.134:6443: connect: connection refused" logger="UnhandledError" Jan 30 18:29:47.079906 containerd[1503]: time="2025-01-30T18:29:47.079230936Z" level=info msg="StartContainer for \"acfce6e135f1bfc5039e804abed9c9e83e2a7a70c769a60f1759ee61ef592cc7\" returns successfully" Jan 30 18:29:47.087495 containerd[1503]: time="2025-01-30T18:29:47.086742325Z" level=info msg="StartContainer for \"c30366c199de5ed042f216c4dd82728282e478ede2824e972a3c4b9f49902017\" returns successfully" Jan 30 18:29:47.106549 kubelet[2261]: I0130 18:29:47.106002 2261 kubelet_node_status.go:72] "Attempting to register node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:47.106549 kubelet[2261]: E0130 18:29:47.106407 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.90.134:6443/api/v1/nodes\": dial tcp 10.244.90.134:6443: connect: connection refused" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:47.132807 containerd[1503]: time="2025-01-30T18:29:47.132757130Z" level=info msg="StartContainer for \"e62fcdd36a59f4d177a4176c99a41996b6d0b6137681b10862c0d165316b43e9\" returns successfully" Jan 30 18:29:47.284628 kubelet[2261]: E0130 18:29:47.284478 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.90.134:6443/api/v1/namespaces/default/events\": dial tcp 10.244.90.134:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-eex0h.gb1.brightbox.com.181f8be259207d4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-eex0h.gb1.brightbox.com,UID:srv-eex0h.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-eex0h.gb1.brightbox.com,},FirstTimestamp:2025-01-30 18:29:45.462472015 +0000 UTC m=+0.523520211,LastTimestamp:2025-01-30 18:29:45.462472015 +0000 UTC 
m=+0.523520211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-eex0h.gb1.brightbox.com,}" Jan 30 18:29:48.710001 kubelet[2261]: I0130 18:29:48.709955 2261 kubelet_node_status.go:72] "Attempting to register node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:49.211890 kubelet[2261]: E0130 18:29:49.211825 2261 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-eex0h.gb1.brightbox.com\" not found" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:49.346064 kubelet[2261]: I0130 18:29:49.345718 2261 kubelet_node_status.go:75] "Successfully registered node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:49.453326 kubelet[2261]: I0130 18:29:49.453009 2261 apiserver.go:52] "Watching apiserver" Jan 30 18:29:49.489359 kubelet[2261]: I0130 18:29:49.488758 2261 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 18:29:49.554538 kubelet[2261]: E0130 18:29:49.554442 2261 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:49.854408 kubelet[2261]: E0130 18:29:49.853958 2261 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-eex0h.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:51.394526 systemd[1]: Reloading requested from client PID 2554 ('systemctl') (unit session-11.scope)... Jan 30 18:29:51.394545 systemd[1]: Reloading... Jan 30 18:29:51.504944 zram_generator::config[2596]: No configuration found. Jan 30 18:29:51.672273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 18:29:51.758569 systemd[1]: Reloading finished in 363 ms. Jan 30 18:29:51.803659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:51.822967 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 18:29:51.823249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:51.823350 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 114.0M memory peak, 0B memory swap peak. Jan 30 18:29:51.830035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:29:51.982879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:29:51.987847 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 18:29:52.059418 kubelet[2657]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 18:29:52.059418 kubelet[2657]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 18:29:52.059418 kubelet[2657]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 18:29:52.059418 kubelet[2657]: I0130 18:29:52.058595 2657 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 18:29:52.068557 kubelet[2657]: I0130 18:29:52.068523 2657 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 18:29:52.069701 kubelet[2657]: I0130 18:29:52.068719 2657 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 18:29:52.069701 kubelet[2657]: I0130 18:29:52.068984 2657 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 18:29:52.070389 kubelet[2657]: I0130 18:29:52.070367 2657 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 18:29:52.074864 kubelet[2657]: I0130 18:29:52.074688 2657 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 18:29:52.078647 kubelet[2657]: E0130 18:29:52.078605 2657 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 18:29:52.078647 kubelet[2657]: I0130 18:29:52.078646 2657 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 18:29:52.081345 kubelet[2657]: I0130 18:29:52.081322 2657 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 18:29:52.081448 kubelet[2657]: I0130 18:29:52.081437 2657 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 18:29:52.081609 kubelet[2657]: I0130 18:29:52.081580 2657 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 18:29:52.081818 kubelet[2657]: I0130 18:29:52.081612 2657 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-eex0h.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 18:29:52.081935 kubelet[2657]: I0130 18:29:52.081827 2657 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 18:29:52.081935 kubelet[2657]: I0130 18:29:52.081838 2657 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 18:29:52.081935 kubelet[2657]: I0130 18:29:52.081870 2657 state_mem.go:36] "Initialized new in-memory state store" Jan 30 18:29:52.082022 kubelet[2657]: I0130 18:29:52.081986 2657 kubelet.go:408] "Attempting to sync node with API server" Jan 30 18:29:52.082022 kubelet[2657]: I0130 18:29:52.081999 2657 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 18:29:52.082078 kubelet[2657]: I0130 18:29:52.082027 2657 kubelet.go:314] "Adding apiserver pod source" Jan 30 18:29:52.082078 kubelet[2657]: I0130 18:29:52.082048 2657 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 18:29:52.092477 kubelet[2657]: I0130 18:29:52.092399 2657 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 18:29:52.093082 kubelet[2657]: I0130 18:29:52.092987 2657 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 18:29:52.093462 kubelet[2657]: I0130 18:29:52.093448 2657 server.go:1269] "Started kubelet" Jan 30 18:29:52.098248 kubelet[2657]: I0130 18:29:52.095577 2657 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 18:29:52.098248 kubelet[2657]: I0130 18:29:52.097938 2657 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 18:29:52.098493 kubelet[2657]: I0130 18:29:52.098480 2657 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 18:29:52.098698 kubelet[2657]: I0130 18:29:52.098594 2657 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 18:29:52.102457 kubelet[2657]: I0130 18:29:52.101518 2657 
server.go:460] "Adding debug handlers to kubelet server" Jan 30 18:29:52.109083 kubelet[2657]: I0130 18:29:52.109059 2657 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 18:29:52.111852 kubelet[2657]: I0130 18:29:52.111834 2657 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 18:29:52.114446 kubelet[2657]: I0130 18:29:52.114426 2657 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 18:29:52.114839 kubelet[2657]: I0130 18:29:52.114550 2657 reconciler.go:26] "Reconciler: start to sync state" Jan 30 18:29:52.117724 kubelet[2657]: I0130 18:29:52.117700 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 18:29:52.118588 kubelet[2657]: E0130 18:29:52.118572 2657 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 18:29:52.120934 kubelet[2657]: I0130 18:29:52.119556 2657 factory.go:221] Registration of the systemd container factory successfully Jan 30 18:29:52.120934 kubelet[2657]: I0130 18:29:52.119643 2657 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 18:29:52.120934 kubelet[2657]: I0130 18:29:52.119717 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 18:29:52.120934 kubelet[2657]: I0130 18:29:52.119744 2657 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 18:29:52.120934 kubelet[2657]: I0130 18:29:52.119761 2657 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 18:29:52.120934 kubelet[2657]: E0130 18:29:52.119807 2657 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 18:29:52.130332 kubelet[2657]: I0130 18:29:52.130144 2657 factory.go:221] Registration of the containerd container factory successfully Jan 30 18:29:52.182525 kubelet[2657]: I0130 18:29:52.182486 2657 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 18:29:52.182880 kubelet[2657]: I0130 18:29:52.182858 2657 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 18:29:52.183119 kubelet[2657]: I0130 18:29:52.182987 2657 state_mem.go:36] "Initialized new in-memory state store" Jan 30 18:29:52.183371 kubelet[2657]: I0130 18:29:52.183349 2657 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 18:29:52.183608 kubelet[2657]: I0130 18:29:52.183454 2657 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 18:29:52.183608 kubelet[2657]: I0130 18:29:52.183488 2657 policy_none.go:49] "None policy: Start" Jan 30 18:29:52.184727 kubelet[2657]: I0130 18:29:52.184413 2657 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 18:29:52.184727 kubelet[2657]: I0130 18:29:52.184449 2657 state_mem.go:35] "Initializing new in-memory state store" Jan 30 18:29:52.184858 kubelet[2657]: I0130 18:29:52.184676 2657 state_mem.go:75] "Updated machine memory state" Jan 30 18:29:52.190284 kubelet[2657]: I0130 18:29:52.190097 2657 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 18:29:52.191804 kubelet[2657]: I0130 18:29:52.191777 2657 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jan 30 18:29:52.191867 kubelet[2657]: I0130 18:29:52.191818 2657 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 18:29:52.192130 kubelet[2657]: I0130 18:29:52.192109 2657 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 18:29:52.230192 kubelet[2657]: W0130 18:29:52.229940 2657 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 18:29:52.230192 kubelet[2657]: W0130 18:29:52.230021 2657 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 18:29:52.231564 kubelet[2657]: W0130 18:29:52.231545 2657 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 18:29:52.305894 kubelet[2657]: I0130 18:29:52.305175 2657 kubelet_node_status.go:72] "Attempting to register node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.317770 kubelet[2657]: I0130 18:29:52.317678 2657 kubelet_node_status.go:111] "Node was previously registered" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.318022 kubelet[2657]: I0130 18:29:52.317821 2657 kubelet_node_status.go:75] "Successfully registered node" node="srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416640 kubelet[2657]: I0130 18:29:52.415931 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb52533bd05b4c3474f3fdfa9376b3fb-ca-certs\") pod \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" (UID: \"bb52533bd05b4c3474f3fdfa9376b3fb\") " pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416640 kubelet[2657]: I0130 18:29:52.416133 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb52533bd05b4c3474f3fdfa9376b3fb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" (UID: \"bb52533bd05b4c3474f3fdfa9376b3fb\") " pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416640 kubelet[2657]: I0130 18:29:52.416228 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-ca-certs\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416640 kubelet[2657]: I0130 18:29:52.416331 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-k8s-certs\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416640 kubelet[2657]: I0130 18:29:52.416432 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb52533bd05b4c3474f3fdfa9376b3fb-k8s-certs\") pod \"kube-apiserver-srv-eex0h.gb1.brightbox.com\" (UID: \"bb52533bd05b4c3474f3fdfa9376b3fb\") " 
pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416950 kubelet[2657]: I0130 18:29:52.416531 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-flexvolume-dir\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416950 kubelet[2657]: I0130 18:29:52.416631 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-kubeconfig\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416950 kubelet[2657]: I0130 18:29:52.416755 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76424f028d3773aa1075d572f7bf0f33-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-eex0h.gb1.brightbox.com\" (UID: \"76424f028d3773aa1075d572f7bf0f33\") " pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:52.416950 kubelet[2657]: I0130 18:29:52.416872 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c88d0f9cb3e91e83c7036ddb7438748e-kubeconfig\") pod \"kube-scheduler-srv-eex0h.gb1.brightbox.com\" (UID: \"c88d0f9cb3e91e83c7036ddb7438748e\") " pod="kube-system/kube-scheduler-srv-eex0h.gb1.brightbox.com" Jan 30 18:29:53.090791 kubelet[2657]: I0130 18:29:53.089446 2657 apiserver.go:52] "Watching apiserver" Jan 30 18:29:53.115439 kubelet[2657]: I0130 18:29:53.114710 2657 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 18:29:53.179718 kubelet[2657]: I0130 18:29:53.176108 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-eex0h.gb1.brightbox.com" podStartSLOduration=1.176069502 podStartE2EDuration="1.176069502s" podCreationTimestamp="2025-01-30 18:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:29:53.175466007 +0000 UTC m=+1.182222218" watchObservedRunningTime="2025-01-30 18:29:53.176069502 +0000 UTC m=+1.182825707" Jan 30 18:29:53.193721 kubelet[2657]: I0130 18:29:53.193477 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-eex0h.gb1.brightbox.com" podStartSLOduration=1.193435583 podStartE2EDuration="1.193435583s" podCreationTimestamp="2025-01-30 18:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:29:53.192709828 +0000 UTC m=+1.199466018" watchObservedRunningTime="2025-01-30 18:29:53.193435583 +0000 UTC m=+1.200191871" Jan 30 18:29:53.214230 kubelet[2657]: I0130 18:29:53.214162 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-eex0h.gb1.brightbox.com" podStartSLOduration=1.21412036 podStartE2EDuration="1.21412036s" podCreationTimestamp="2025-01-30 18:29:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:29:53.213490618 +0000 UTC m=+1.220246845" watchObservedRunningTime="2025-01-30 18:29:53.21412036 +0000 UTC m=+1.220876576" Jan 30 18:29:57.235066 kubelet[2657]: I0130 18:29:57.234924 2657 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 18:29:57.238337 containerd[1503]: time="2025-01-30T18:29:57.237837549Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 18:29:57.240178 kubelet[2657]: I0130 18:29:57.239387 2657 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 18:29:57.898212 systemd[1]: Created slice kubepods-besteffort-pod821f0875_71d4_4f99_a738_8600a344647e.slice - libcontainer container kubepods-besteffort-pod821f0875_71d4_4f99_a738_8600a344647e.slice. Jan 30 18:29:57.953243 kubelet[2657]: I0130 18:29:57.953165 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/821f0875-71d4-4f99-a738-8600a344647e-kube-proxy\") pod \"kube-proxy-htk7g\" (UID: \"821f0875-71d4-4f99-a738-8600a344647e\") " pod="kube-system/kube-proxy-htk7g" Jan 30 18:29:57.953243 kubelet[2657]: I0130 18:29:57.953253 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/821f0875-71d4-4f99-a738-8600a344647e-lib-modules\") pod \"kube-proxy-htk7g\" (UID: \"821f0875-71d4-4f99-a738-8600a344647e\") " pod="kube-system/kube-proxy-htk7g" Jan 30 18:29:57.953243 kubelet[2657]: I0130 18:29:57.953301 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwft\" (UniqueName: \"kubernetes.io/projected/821f0875-71d4-4f99-a738-8600a344647e-kube-api-access-kpwft\") pod \"kube-proxy-htk7g\" (UID: \"821f0875-71d4-4f99-a738-8600a344647e\") " pod="kube-system/kube-proxy-htk7g" Jan 30 18:29:57.953243 kubelet[2657]: I0130 18:29:57.953369 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/821f0875-71d4-4f99-a738-8600a344647e-xtables-lock\") pod \"kube-proxy-htk7g\" (UID: \"821f0875-71d4-4f99-a738-8600a344647e\") " pod="kube-system/kube-proxy-htk7g" Jan 30 18:29:58.209944 containerd[1503]: time="2025-01-30T18:29:58.209809401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htk7g,Uid:821f0875-71d4-4f99-a738-8600a344647e,Namespace:kube-system,Attempt:0,}" Jan 30 18:29:58.218194 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 30 18:29:58.246899 containerd[1503]: time="2025-01-30T18:29:58.246318160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:29:58.246899 containerd[1503]: time="2025-01-30T18:29:58.246398989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:29:58.246899 containerd[1503]: time="2025-01-30T18:29:58.246440499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:58.246899 containerd[1503]: time="2025-01-30T18:29:58.246645466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:58.275451 systemd[1]: run-containerd-runc-k8s.io-7c19f8e01abc38fb3bd071dee6d946877701166c6df0c54f7ea63d29c19de52f-runc.hRuLpU.mount: Deactivated successfully. Jan 30 18:29:58.288058 systemd[1]: Started cri-containerd-7c19f8e01abc38fb3bd071dee6d946877701166c6df0c54f7ea63d29c19de52f.scope - libcontainer container 7c19f8e01abc38fb3bd071dee6d946877701166c6df0c54f7ea63d29c19de52f. Jan 30 18:29:58.329495 containerd[1503]: time="2025-01-30T18:29:58.328952095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htk7g,Uid:821f0875-71d4-4f99-a738-8600a344647e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c19f8e01abc38fb3bd071dee6d946877701166c6df0c54f7ea63d29c19de52f\"" Jan 30 18:29:58.336941 containerd[1503]: time="2025-01-30T18:29:58.336885550Z" level=info msg="CreateContainer within sandbox \"7c19f8e01abc38fb3bd071dee6d946877701166c6df0c54f7ea63d29c19de52f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 18:29:58.360158 containerd[1503]: time="2025-01-30T18:29:58.358147151Z" level=info msg="CreateContainer within sandbox \"7c19f8e01abc38fb3bd071dee6d946877701166c6df0c54f7ea63d29c19de52f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1fc4c7c3a842ceea72d425bb333db135d28de58068a748d5dd41c28a39d8edd\"" Jan 30 18:29:58.360158 containerd[1503]: time="2025-01-30T18:29:58.359122749Z" level=info msg="StartContainer for \"f1fc4c7c3a842ceea72d425bb333db135d28de58068a748d5dd41c28a39d8edd\"" Jan 30 18:29:58.372034 sshd[1773]: pam_unix(sshd:session): session closed for user core Jan 30 18:29:58.380178 systemd[1]: sshd@8-10.244.90.134:22-139.178.89.65:38246.service: Deactivated successfully. Jan 30 18:29:58.382007 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Jan 30 18:29:58.384888 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 18:29:58.385206 systemd[1]: session-11.scope: Consumed 4.844s CPU time, 145.4M memory peak, 0B memory swap peak. Jan 30 18:29:58.387616 systemd-logind[1486]: Removed session 11. Jan 30 18:29:58.404275 kubelet[2657]: W0130 18:29:58.402943 2657 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:srv-eex0h.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-eex0h.gb1.brightbox.com' and this object Jan 30 18:29:58.404275 kubelet[2657]: E0130 18:29:58.403097 2657 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-eex0h.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-eex0h.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 30 18:29:58.411658 systemd[1]: Created slice kubepods-besteffort-pod6aabdf00_4943_4abd_b2b7_b63ddb657a83.slice - libcontainer container kubepods-besteffort-pod6aabdf00_4943_4abd_b2b7_b63ddb657a83.slice. 
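The reflector failure just above is the API server's Node authorizer at work: a kubelet identity may only read ConfigMaps and Secrets referenced by a pod already bound to its node, so listing "kubernetes-services-endpoint" fails with "no relationship found between node ... and this object" while the tigera-operator pod is still being scheduled. A minimal Go sketch of the same access check via a SelfSubjectAccessReview, assuming client-go and the kubelet's kubeconfig at /etc/kubernetes/kubelet.conf (both the path and the use of SSAR here are assumptions, not what the kubelet itself does):

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for a kubeadm-provisioned node.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Ask the API server whether this identity may perform the exact
        // request the reflector was denied.
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "tigera-operator",
                    Verb:      "list",
                    Resource:  "configmaps",
                },
            },
        }
        res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
            context.Background(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }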
Jan 30 18:29:58.423877 systemd[1]: Started cri-containerd-f1fc4c7c3a842ceea72d425bb333db135d28de58068a748d5dd41c28a39d8edd.scope - libcontainer container f1fc4c7c3a842ceea72d425bb333db135d28de58068a748d5dd41c28a39d8edd. Jan 30 18:29:58.452357 containerd[1503]: time="2025-01-30T18:29:58.452089920Z" level=info msg="StartContainer for \"f1fc4c7c3a842ceea72d425bb333db135d28de58068a748d5dd41c28a39d8edd\" returns successfully" Jan 30 18:29:58.455950 kubelet[2657]: I0130 18:29:58.455912 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6aabdf00-4943-4abd-b2b7-b63ddb657a83-var-lib-calico\") pod \"tigera-operator-76c4976dd7-hjmnn\" (UID: \"6aabdf00-4943-4abd-b2b7-b63ddb657a83\") " pod="tigera-operator/tigera-operator-76c4976dd7-hjmnn" Jan 30 18:29:58.456165 kubelet[2657]: I0130 18:29:58.456150 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfw9\" (UniqueName: \"kubernetes.io/projected/6aabdf00-4943-4abd-b2b7-b63ddb657a83-kube-api-access-xsfw9\") pod \"tigera-operator-76c4976dd7-hjmnn\" (UID: \"6aabdf00-4943-4abd-b2b7-b63ddb657a83\") " pod="tigera-operator/tigera-operator-76c4976dd7-hjmnn" Jan 30 18:29:58.722442 containerd[1503]: time="2025-01-30T18:29:58.722382107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hjmnn,Uid:6aabdf00-4943-4abd-b2b7-b63ddb657a83,Namespace:tigera-operator,Attempt:0,}" Jan 30 18:29:58.760709 containerd[1503]: time="2025-01-30T18:29:58.758887171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:29:58.760709 containerd[1503]: time="2025-01-30T18:29:58.758957150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:29:58.760709 containerd[1503]: time="2025-01-30T18:29:58.758974493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:58.760709 containerd[1503]: time="2025-01-30T18:29:58.759070396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:29:58.792867 systemd[1]: Started cri-containerd-b6d933f16b9b0d03a2dc1932297227a5a6dfffe12e53e5bb77aee7e4a294d8ef.scope - libcontainer container b6d933f16b9b0d03a2dc1932297227a5a6dfffe12e53e5bb77aee7e4a294d8ef. 
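The RunPodSandbox/CreateContainer/StartContainer records here are the kubelet driving containerd over the CRI gRPC API on a local unix socket. A sketch that queries the same endpoint with the published k8s.io/cri-api bindings, assuming containerd's default socket path (/run/containerd/containerd.sock) and read access to it:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // grpc.Dial is deprecated in recent grpc-go but still available; the
        // CRI socket speaks plaintext gRPC, so no TLS credentials are needed.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("runtime %s %s (CRI %s)\n",
            ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        // List the pod sandboxes this log shows being created.
        pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s %s/%s state=%v\n",
                p.Id[:12], p.Metadata.Namespace, p.Metadata.Name, p.State)
        }
    }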
Jan 30 18:29:58.844257 containerd[1503]: time="2025-01-30T18:29:58.844194827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hjmnn,Uid:6aabdf00-4943-4abd-b2b7-b63ddb657a83,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b6d933f16b9b0d03a2dc1932297227a5a6dfffe12e53e5bb77aee7e4a294d8ef\"" Jan 30 18:29:58.849130 containerd[1503]: time="2025-01-30T18:29:58.849027050Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 18:29:59.194584 kubelet[2657]: I0130 18:29:59.194176 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-htk7g" podStartSLOduration=2.194095723 podStartE2EDuration="2.194095723s" podCreationTimestamp="2025-01-30 18:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:29:59.189982907 +0000 UTC m=+7.196739120" watchObservedRunningTime="2025-01-30 18:29:59.194095723 +0000 UTC m=+7.200852025" Jan 30 18:30:01.375754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282842922.mount: Deactivated successfully. Jan 30 18:30:02.428768 containerd[1503]: time="2025-01-30T18:30:02.427887034Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:02.429332 containerd[1503]: time="2025-01-30T18:30:02.428962090Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 18:30:02.429532 containerd[1503]: time="2025-01-30T18:30:02.429507526Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:02.431748 containerd[1503]: time="2025-01-30T18:30:02.431716164Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:02.432759 containerd[1503]: time="2025-01-30T18:30:02.432731497Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.583669193s" Jan 30 18:30:02.432850 containerd[1503]: time="2025-01-30T18:30:02.432764866Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 18:30:02.436467 containerd[1503]: time="2025-01-30T18:30:02.436432812Z" level=info msg="CreateContainer within sandbox \"b6d933f16b9b0d03a2dc1932297227a5a6dfffe12e53e5bb77aee7e4a294d8ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 18:30:02.451000 containerd[1503]: time="2025-01-30T18:30:02.450960717Z" level=info msg="CreateContainer within sandbox \"b6d933f16b9b0d03a2dc1932297227a5a6dfffe12e53e5bb77aee7e4a294d8ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5ea475275658d1936ab1480bcae081d26ffaf4d418abdf567a81ffeb2c6f6dc\"" Jan 30 18:30:02.451606 containerd[1503]: time="2025-01-30T18:30:02.451504078Z" level=info msg="StartContainer for \"a5ea475275658d1936ab1480bcae081d26ffaf4d418abdf567a81ffeb2c6f6dc\"" Jan 
30 18:30:02.497870 systemd[1]: Started cri-containerd-a5ea475275658d1936ab1480bcae081d26ffaf4d418abdf567a81ffeb2c6f6dc.scope - libcontainer container a5ea475275658d1936ab1480bcae081d26ffaf4d418abdf567a81ffeb2c6f6dc. Jan 30 18:30:02.528224 containerd[1503]: time="2025-01-30T18:30:02.528165051Z" level=info msg="StartContainer for \"a5ea475275658d1936ab1480bcae081d26ffaf4d418abdf567a81ffeb2c6f6dc\" returns successfully" Jan 30 18:30:03.213869 kubelet[2657]: I0130 18:30:03.213561 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-hjmnn" podStartSLOduration=1.6270945829999999 podStartE2EDuration="5.213531602s" podCreationTimestamp="2025-01-30 18:29:58 +0000 UTC" firstStartedPulling="2025-01-30 18:29:58.848539968 +0000 UTC m=+6.855296164" lastFinishedPulling="2025-01-30 18:30:02.43497699 +0000 UTC m=+10.441733183" observedRunningTime="2025-01-30 18:30:03.212915471 +0000 UTC m=+11.219671781" watchObservedRunningTime="2025-01-30 18:30:03.213531602 +0000 UTC m=+11.220287847" Jan 30 18:30:05.835749 systemd[1]: Created slice kubepods-besteffort-pod3fcf1ca2_bd91_49fd_9bc2_1c81c88cd820.slice - libcontainer container kubepods-besteffort-pod3fcf1ca2_bd91_49fd_9bc2_1c81c88cd820.slice. Jan 30 18:30:05.977456 systemd[1]: Created slice kubepods-besteffort-pod520c3df4_471d_42fe_a007_29ae42503a9c.slice - libcontainer container kubepods-besteffort-pod520c3df4_471d_42fe_a007_29ae42503a9c.slice. Jan 30 18:30:06.005537 kubelet[2657]: I0130 18:30:06.005347 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sdl6\" (UniqueName: \"kubernetes.io/projected/3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820-kube-api-access-8sdl6\") pod \"calico-typha-8d87bbbf8-f96tl\" (UID: \"3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820\") " pod="calico-system/calico-typha-8d87bbbf8-f96tl" Jan 30 18:30:06.005537 kubelet[2657]: I0130 18:30:06.005409 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820-tigera-ca-bundle\") pod \"calico-typha-8d87bbbf8-f96tl\" (UID: \"3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820\") " pod="calico-system/calico-typha-8d87bbbf8-f96tl" Jan 30 18:30:06.005537 kubelet[2657]: I0130 18:30:06.005435 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820-typha-certs\") pod \"calico-typha-8d87bbbf8-f96tl\" (UID: \"3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820\") " pod="calico-system/calico-typha-8d87bbbf8-f96tl" Jan 30 18:30:06.094727 kubelet[2657]: E0130 18:30:06.094078 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:06.107738 kubelet[2657]: I0130 18:30:06.107014 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/24904d3d-1841-497d-aeec-b53f05fc7a48-varrun\") pod \"csi-node-driver-zzj4j\" (UID: \"24904d3d-1841-497d-aeec-b53f05fc7a48\") " pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:06.107738 kubelet[2657]: I0130 18:30:06.107061 2657 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-cni-log-dir\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.107738 kubelet[2657]: I0130 18:30:06.107081 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24904d3d-1841-497d-aeec-b53f05fc7a48-registration-dir\") pod \"csi-node-driver-zzj4j\" (UID: \"24904d3d-1841-497d-aeec-b53f05fc7a48\") " pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:06.107738 kubelet[2657]: I0130 18:30:06.107102 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr22\" (UniqueName: \"kubernetes.io/projected/24904d3d-1841-497d-aeec-b53f05fc7a48-kube-api-access-hwr22\") pod \"csi-node-driver-zzj4j\" (UID: \"24904d3d-1841-497d-aeec-b53f05fc7a48\") " pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:06.107738 kubelet[2657]: I0130 18:30:06.107129 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-policysync\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108108 kubelet[2657]: I0130 18:30:06.107147 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-var-run-calico\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108108 kubelet[2657]: I0130 18:30:06.107165 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/520c3df4-471d-42fe-a007-29ae42503a9c-node-certs\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108108 kubelet[2657]: I0130 18:30:06.107186 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-cni-bin-dir\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108108 kubelet[2657]: I0130 18:30:06.107213 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-flexvol-driver-host\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108108 kubelet[2657]: I0130 18:30:06.107245 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-var-lib-calico\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108270 kubelet[2657]: I0130 18:30:06.107262 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-cni-net-dir\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108270 kubelet[2657]: I0130 18:30:06.107282 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjtb\" (UniqueName: \"kubernetes.io/projected/520c3df4-471d-42fe-a007-29ae42503a9c-kube-api-access-dhjtb\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108270 kubelet[2657]: I0130 18:30:06.107356 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/520c3df4-471d-42fe-a007-29ae42503a9c-tigera-ca-bundle\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108270 kubelet[2657]: I0130 18:30:06.107374 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24904d3d-1841-497d-aeec-b53f05fc7a48-kubelet-dir\") pod \"csi-node-driver-zzj4j\" (UID: \"24904d3d-1841-497d-aeec-b53f05fc7a48\") " pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:06.108270 kubelet[2657]: I0130 18:30:06.107393 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24904d3d-1841-497d-aeec-b53f05fc7a48-socket-dir\") pod \"csi-node-driver-zzj4j\" (UID: \"24904d3d-1841-497d-aeec-b53f05fc7a48\") " pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:06.108419 kubelet[2657]: I0130 18:30:06.107417 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-lib-modules\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.108419 kubelet[2657]: I0130 18:30:06.107434 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/520c3df4-471d-42fe-a007-29ae42503a9c-xtables-lock\") pod \"calico-node-59cd8\" (UID: \"520c3df4-471d-42fe-a007-29ae42503a9c\") " pod="calico-system/calico-node-59cd8" Jan 30 18:30:06.141454 containerd[1503]: time="2025-01-30T18:30:06.140850727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d87bbbf8-f96tl,Uid:3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820,Namespace:calico-system,Attempt:0,}" Jan 30 18:30:06.217480 kubelet[2657]: E0130 18:30:06.217214 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.217480 kubelet[2657]: W0130 18:30:06.217260 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.217480 kubelet[2657]: E0130 18:30:06.217314 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.220710 kubelet[2657]: E0130 18:30:06.219994 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.220710 kubelet[2657]: W0130 18:30:06.220017 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.220710 kubelet[2657]: E0130 18:30:06.220574 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.221319 kubelet[2657]: E0130 18:30:06.221144 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.221886 kubelet[2657]: W0130 18:30:06.221156 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.221886 kubelet[2657]: E0130 18:30:06.221746 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.222394 containerd[1503]: time="2025-01-30T18:30:06.215174751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:06.222394 containerd[1503]: time="2025-01-30T18:30:06.220213572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:06.222394 containerd[1503]: time="2025-01-30T18:30:06.220233090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:06.222394 containerd[1503]: time="2025-01-30T18:30:06.220349328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:06.241445 kubelet[2657]: E0130 18:30:06.237790 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.241445 kubelet[2657]: W0130 18:30:06.237820 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.241445 kubelet[2657]: E0130 18:30:06.237865 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.250867 kubelet[2657]: E0130 18:30:06.250840 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.251193 kubelet[2657]: W0130 18:30:06.251033 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.251193 kubelet[2657]: E0130 18:30:06.251070 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.252068 kubelet[2657]: E0130 18:30:06.251852 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.252068 kubelet[2657]: W0130 18:30:06.251867 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.252068 kubelet[2657]: E0130 18:30:06.251890 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.258941 kubelet[2657]: E0130 18:30:06.258314 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.258941 kubelet[2657]: W0130 18:30:06.258334 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.260205 kubelet[2657]: E0130 18:30:06.259205 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.265695 kubelet[2657]: E0130 18:30:06.264720 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.265695 kubelet[2657]: W0130 18:30:06.264744 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.265695 kubelet[2657]: E0130 18:30:06.265115 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.266110 kubelet[2657]: E0130 18:30:06.265996 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.266110 kubelet[2657]: W0130 18:30:06.266009 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.266792 kubelet[2657]: E0130 18:30:06.266691 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.267241 kubelet[2657]: E0130 18:30:06.267229 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.269843 kubelet[2657]: W0130 18:30:06.268890 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.269843 kubelet[2657]: E0130 18:30:06.269794 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.270544 kubelet[2657]: E0130 18:30:06.270427 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.270544 kubelet[2657]: W0130 18:30:06.270439 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.270544 kubelet[2657]: E0130 18:30:06.270463 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.271718 kubelet[2657]: E0130 18:30:06.271053 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.271718 kubelet[2657]: W0130 18:30:06.271065 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.271718 kubelet[2657]: E0130 18:30:06.271087 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.275967 kubelet[2657]: E0130 18:30:06.275751 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.275967 kubelet[2657]: W0130 18:30:06.275880 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.276906 systemd[1]: Started cri-containerd-5c316e2679283c45cb4a4c5b1c13e00d6ff254afd93204a912a4cdf2800faf02.scope - libcontainer container 5c316e2679283c45cb4a4c5b1c13e00d6ff254afd93204a912a4cdf2800faf02. 
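The repeated driver-call failures around the typha sandbox start are the kubelet re-probing its FlexVolume plugin directory on every volume event: the nodeagent~uds/uds binary referenced under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is not shipped on this image, the probe therefore produces no stdout, and an empty string is not valid JSON, hence "unexpected end of JSON input". The failure is easy to reproduce; DriverStatus below is a minimal local mirror of a FlexVolume reply, not the kubelet's own type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus mirrors the minimal shape of a FlexVolume driver reply.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        var st DriverStatus

        // An empty reply — what a missing driver binary effectively yields —
        // fails with exactly the error recorded in the log.
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input

        // A well-formed `init` reply parses cleanly.
        ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
        if err := json.Unmarshal(ok, &st); err != nil {
            panic(err)
        }
        fmt.Printf("status=%s attach=%v\n", st.Status, st.Capabilities["attach"])
    }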
Jan 30 18:30:06.278204 kubelet[2657]: E0130 18:30:06.277163 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.278204 kubelet[2657]: W0130 18:30:06.277174 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.278455 kubelet[2657]: E0130 18:30:06.278442 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.278540 kubelet[2657]: W0130 18:30:06.278530 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.278848 kubelet[2657]: E0130 18:30:06.278837 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.278982 kubelet[2657]: W0130 18:30:06.278890 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.279627 kubelet[2657]: E0130 18:30:06.279614 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.279874 kubelet[2657]: E0130 18:30:06.279697 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.279874 kubelet[2657]: E0130 18:30:06.279709 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.279874 kubelet[2657]: E0130 18:30:06.279717 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.279874 kubelet[2657]: E0130 18:30:06.279725 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.280196 kubelet[2657]: W0130 18:30:06.279768 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.280498 kubelet[2657]: E0130 18:30:06.280396 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.282031 kubelet[2657]: E0130 18:30:06.281940 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.282031 kubelet[2657]: W0130 18:30:06.281951 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.282399 kubelet[2657]: E0130 18:30:06.282284 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.282399 kubelet[2657]: E0130 18:30:06.282292 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.282399 kubelet[2657]: W0130 18:30:06.282312 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.282399 kubelet[2657]: E0130 18:30:06.282347 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.282818 kubelet[2657]: E0130 18:30:06.282545 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.282818 kubelet[2657]: W0130 18:30:06.282555 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.282818 kubelet[2657]: E0130 18:30:06.282751 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.282818 kubelet[2657]: W0130 18:30:06.282759 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.282818 kubelet[2657]: E0130 18:30:06.282775 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.282818 kubelet[2657]: E0130 18:30:06.282790 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.282998 kubelet[2657]: E0130 18:30:06.282922 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.282998 kubelet[2657]: W0130 18:30:06.282940 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.283978 kubelet[2657]: E0130 18:30:06.283096 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.283978 kubelet[2657]: W0130 18:30:06.283106 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.283978 kubelet[2657]: E0130 18:30:06.283328 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.283978 kubelet[2657]: W0130 18:30:06.283336 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.283978 kubelet[2657]: E0130 18:30:06.283347 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.283978 kubelet[2657]: E0130 18:30:06.283706 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.283978 kubelet[2657]: E0130 18:30:06.283735 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.284525 kubelet[2657]: E0130 18:30:06.284254 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.284525 kubelet[2657]: W0130 18:30:06.284265 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.284525 kubelet[2657]: E0130 18:30:06.284292 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.286699 kubelet[2657]: E0130 18:30:06.284833 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.286699 kubelet[2657]: W0130 18:30:06.284851 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.286699 kubelet[2657]: E0130 18:30:06.284868 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.286903 kubelet[2657]: E0130 18:30:06.286892 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.287029 kubelet[2657]: W0130 18:30:06.286959 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.287029 kubelet[2657]: E0130 18:30:06.286980 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.287301 kubelet[2657]: E0130 18:30:06.287291 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.287447 kubelet[2657]: W0130 18:30:06.287357 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.287447 kubelet[2657]: E0130 18:30:06.287376 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.287631 kubelet[2657]: E0130 18:30:06.287617 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.287674 kubelet[2657]: W0130 18:30:06.287631 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.287674 kubelet[2657]: E0130 18:30:06.287659 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.288068 kubelet[2657]: E0130 18:30:06.288055 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.288068 kubelet[2657]: W0130 18:30:06.288067 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.288246 kubelet[2657]: E0130 18:30:06.288171 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.288279 kubelet[2657]: E0130 18:30:06.288260 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.288279 kubelet[2657]: W0130 18:30:06.288266 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.288370 kubelet[2657]: E0130 18:30:06.288358 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:06.288459 kubelet[2657]: E0130 18:30:06.288449 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:06.288488 kubelet[2657]: W0130 18:30:06.288459 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:06.288488 kubelet[2657]: E0130 18:30:06.288468 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:06.341949 containerd[1503]: time="2025-01-30T18:30:06.341804750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d87bbbf8-f96tl,Uid:3fcf1ca2-bd91-49fd-9bc2-1c81c88cd820,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c316e2679283c45cb4a4c5b1c13e00d6ff254afd93204a912a4cdf2800faf02\"" Jan 30 18:30:06.346825 containerd[1503]: time="2025-01-30T18:30:06.345670767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 18:30:06.584008 containerd[1503]: time="2025-01-30T18:30:06.583873385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-59cd8,Uid:520c3df4-471d-42fe-a007-29ae42503a9c,Namespace:calico-system,Attempt:0,}" Jan 30 18:30:06.613379 containerd[1503]: time="2025-01-30T18:30:06.612919331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:06.613379 containerd[1503]: time="2025-01-30T18:30:06.612976805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:06.613379 containerd[1503]: time="2025-01-30T18:30:06.612998471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:06.614446 containerd[1503]: time="2025-01-30T18:30:06.613347710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:06.637904 systemd[1]: Started cri-containerd-0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1.scope - libcontainer container 0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1. Jan 30 18:30:06.672933 containerd[1503]: time="2025-01-30T18:30:06.672743259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-59cd8,Uid:520c3df4-471d-42fe-a007-29ae42503a9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\"" Jan 30 18:30:07.855080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989512597.mount: Deactivated successfully. 
Jan 30 18:30:08.121223 kubelet[2657]: E0130 18:30:08.120921 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:08.790821 containerd[1503]: time="2025-01-30T18:30:08.790772073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:08.791634 containerd[1503]: time="2025-01-30T18:30:08.791571682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 18:30:08.792276 containerd[1503]: time="2025-01-30T18:30:08.792086789Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:08.793859 containerd[1503]: time="2025-01-30T18:30:08.793832350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:08.795840 containerd[1503]: time="2025-01-30T18:30:08.795808717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.450074582s" Jan 30 18:30:08.796575 containerd[1503]: time="2025-01-30T18:30:08.795934577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 18:30:08.800577 containerd[1503]: time="2025-01-30T18:30:08.800558199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 18:30:08.816384 containerd[1503]: time="2025-01-30T18:30:08.816349856Z" level=info msg="CreateContainer within sandbox \"5c316e2679283c45cb4a4c5b1c13e00d6ff254afd93204a912a4cdf2800faf02\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 18:30:08.827065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34839820.mount: Deactivated successfully. Jan 30 18:30:08.830443 containerd[1503]: time="2025-01-30T18:30:08.830346577Z" level=info msg="CreateContainer within sandbox \"5c316e2679283c45cb4a4c5b1c13e00d6ff254afd93204a912a4cdf2800faf02\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ac8b3f5e817431f8ad234f4882e4cdd1c7adbd57c8cee68f212758bd9e2fe1c8\"" Jan 30 18:30:08.831919 containerd[1503]: time="2025-01-30T18:30:08.830921550Z" level=info msg="StartContainer for \"ac8b3f5e817431f8ad234f4882e4cdd1c7adbd57c8cee68f212758bd9e2fe1c8\"" Jan 30 18:30:08.869866 systemd[1]: Started cri-containerd-ac8b3f5e817431f8ad234f4882e4cdd1c7adbd57c8cee68f212758bd9e2fe1c8.scope - libcontainer container ac8b3f5e817431f8ad234f4882e4cdd1c7adbd57c8cee68f212758bd9e2fe1c8. 
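The PullImage / ImageCreate / "Pulled ... in 2.450074582s" sequence above is containerd resolving the typha image, recording image entries for the tag, the config blob, and the repo digest, and reporting the measured pull time back to the CRI (the duration lines up with the PullImage request logged at 18:30:06.345, to within about a millisecond of internal bookkeeping). A minimal sketch of the same pull through containerd's Go client; the socket path and the k8s.io namespace are the conventional CRI defaults, assumed here rather than shown in the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to containerd's default socket (assumed path).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace, as the
        // shim mount names elsewhere in this log suggest.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }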
Jan 30 18:30:08.926107 containerd[1503]: time="2025-01-30T18:30:08.926062850Z" level=info msg="StartContainer for \"ac8b3f5e817431f8ad234f4882e4cdd1c7adbd57c8cee68f212758bd9e2fe1c8\" returns successfully" Jan 30 18:30:09.231619 kubelet[2657]: E0130 18:30:09.230930 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:09.231619 kubelet[2657]: W0130 18:30:09.230967 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:09.231619 kubelet[2657]: E0130 18:30:09.230996 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:09.328597 kubelet[2657]: E0130 18:30:09.328343 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:09.328597 kubelet[2657]: W0130 18:30:09.328388 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:09.328597 kubelet[2657]: E0130 18:30:09.328426 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 30 18:30:10.124531 kubelet[2657]: E0130 18:30:10.122197 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:10.221072 kubelet[2657]: I0130 18:30:10.221040 2657 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:30:10.246163 kubelet[2657]: E0130 18:30:10.246066 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.246163 kubelet[2657]: W0130 18:30:10.246115 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.246163 kubelet[2657]: E0130 18:30:10.246189 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.247792 kubelet[2657]: E0130 18:30:10.246981 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.247792 kubelet[2657]: W0130 18:30:10.246991 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.247792 kubelet[2657]: E0130 18:30:10.247005 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.247792 kubelet[2657]: E0130 18:30:10.247565 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.247792 kubelet[2657]: W0130 18:30:10.247574 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.247792 kubelet[2657]: E0130 18:30:10.247585 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.248307 kubelet[2657]: E0130 18:30:10.247897 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.248307 kubelet[2657]: W0130 18:30:10.247905 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.248307 kubelet[2657]: E0130 18:30:10.247925 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.248307 kubelet[2657]: E0130 18:30:10.248159 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.248307 kubelet[2657]: W0130 18:30:10.248167 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.248307 kubelet[2657]: E0130 18:30:10.248177 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.248826 kubelet[2657]: E0130 18:30:10.248350 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.248826 kubelet[2657]: W0130 18:30:10.248359 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.248826 kubelet[2657]: E0130 18:30:10.248367 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.248826 kubelet[2657]: E0130 18:30:10.248550 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.248826 kubelet[2657]: W0130 18:30:10.248558 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.248826 kubelet[2657]: E0130 18:30:10.248566 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.248826 kubelet[2657]: E0130 18:30:10.248753 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.248826 kubelet[2657]: W0130 18:30:10.248760 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.248826 kubelet[2657]: E0130 18:30:10.248768 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.248947 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.252210 kubelet[2657]: W0130 18:30:10.248968 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.248977 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.249136 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.252210 kubelet[2657]: W0130 18:30:10.249143 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.249151 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.249548 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.252210 kubelet[2657]: W0130 18:30:10.249556 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.249563 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.252210 kubelet[2657]: E0130 18:30:10.249851 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.253044 kubelet[2657]: W0130 18:30:10.249860 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.253044 kubelet[2657]: E0130 18:30:10.249870 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.253044 kubelet[2657]: E0130 18:30:10.251285 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.253044 kubelet[2657]: W0130 18:30:10.251293 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.253044 kubelet[2657]: E0130 18:30:10.251302 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.253044 kubelet[2657]: E0130 18:30:10.251491 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.253044 kubelet[2657]: W0130 18:30:10.251509 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.253044 kubelet[2657]: E0130 18:30:10.251518 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.253044 kubelet[2657]: E0130 18:30:10.251714 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.253044 kubelet[2657]: W0130 18:30:10.251721 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.253893 kubelet[2657]: E0130 18:30:10.251729 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.337381 kubelet[2657]: E0130 18:30:10.337278 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.337381 kubelet[2657]: W0130 18:30:10.337307 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.337381 kubelet[2657]: E0130 18:30:10.337622 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.338517 kubelet[2657]: E0130 18:30:10.338363 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.338517 kubelet[2657]: W0130 18:30:10.338377 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.338517 kubelet[2657]: E0130 18:30:10.338395 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.339100 kubelet[2657]: E0130 18:30:10.338951 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.339100 kubelet[2657]: W0130 18:30:10.338964 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.339100 kubelet[2657]: E0130 18:30:10.338997 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.339716 kubelet[2657]: E0130 18:30:10.339593 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.339716 kubelet[2657]: W0130 18:30:10.339605 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.339716 kubelet[2657]: E0130 18:30:10.339635 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.340215 kubelet[2657]: E0130 18:30:10.340071 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.340215 kubelet[2657]: W0130 18:30:10.340084 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.340215 kubelet[2657]: E0130 18:30:10.340105 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.340487 kubelet[2657]: E0130 18:30:10.340284 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.340487 kubelet[2657]: W0130 18:30:10.340295 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.340487 kubelet[2657]: E0130 18:30:10.340317 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.340929 kubelet[2657]: E0130 18:30:10.340780 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.340929 kubelet[2657]: W0130 18:30:10.340795 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.340929 kubelet[2657]: E0130 18:30:10.340824 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.341334 kubelet[2657]: E0130 18:30:10.341107 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.341334 kubelet[2657]: W0130 18:30:10.341117 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.341334 kubelet[2657]: E0130 18:30:10.341133 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.341761 kubelet[2657]: E0130 18:30:10.341750 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.341926 kubelet[2657]: W0130 18:30:10.341858 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.341926 kubelet[2657]: E0130 18:30:10.341892 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.342521 kubelet[2657]: E0130 18:30:10.342387 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.342521 kubelet[2657]: W0130 18:30:10.342400 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.342727 kubelet[2657]: E0130 18:30:10.342467 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.342883 kubelet[2657]: E0130 18:30:10.342804 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.342883 kubelet[2657]: W0130 18:30:10.342812 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.342883 kubelet[2657]: E0130 18:30:10.342836 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.343324 kubelet[2657]: E0130 18:30:10.343283 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.343324 kubelet[2657]: W0130 18:30:10.343294 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.343564 kubelet[2657]: E0130 18:30:10.343313 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.344433 kubelet[2657]: E0130 18:30:10.344413 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.344433 kubelet[2657]: W0130 18:30:10.344434 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.344647 kubelet[2657]: E0130 18:30:10.344456 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.345089 kubelet[2657]: E0130 18:30:10.345076 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.346624 kubelet[2657]: W0130 18:30:10.345173 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.346624 kubelet[2657]: E0130 18:30:10.345194 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.346624 kubelet[2657]: E0130 18:30:10.345933 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.346624 kubelet[2657]: W0130 18:30:10.345955 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.346624 kubelet[2657]: E0130 18:30:10.345972 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.346624 kubelet[2657]: E0130 18:30:10.346571 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.346624 kubelet[2657]: W0130 18:30:10.346612 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.347105 kubelet[2657]: E0130 18:30:10.346673 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.348201 kubelet[2657]: E0130 18:30:10.347952 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.348201 kubelet[2657]: W0130 18:30:10.347969 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.348201 kubelet[2657]: E0130 18:30:10.347983 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:30:10.348852 kubelet[2657]: E0130 18:30:10.348590 2657 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:30:10.348852 kubelet[2657]: W0130 18:30:10.348601 2657 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:30:10.348852 kubelet[2657]: E0130 18:30:10.348613 2657 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:30:10.395714 containerd[1503]: time="2025-01-30T18:30:10.393018684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:10.395714 containerd[1503]: time="2025-01-30T18:30:10.394481629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 18:30:10.395714 containerd[1503]: time="2025-01-30T18:30:10.395083840Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:10.397326 containerd[1503]: time="2025-01-30T18:30:10.397005470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:10.398275 containerd[1503]: time="2025-01-30T18:30:10.397746219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.597074863s" Jan 30 18:30:10.398275 containerd[1503]: time="2025-01-30T18:30:10.397780985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 18:30:10.401986 containerd[1503]: time="2025-01-30T18:30:10.401914957Z" level=info msg="CreateContainer within sandbox \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 18:30:10.414847 containerd[1503]: time="2025-01-30T18:30:10.414752778Z" level=info msg="CreateContainer within sandbox \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d\"" Jan 30 18:30:10.417866 containerd[1503]: time="2025-01-30T18:30:10.416621491Z" level=info msg="StartContainer for \"1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d\"" Jan 30 18:30:10.461929 systemd[1]: Started cri-containerd-1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d.scope - libcontainer container 1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d. Jan 30 18:30:10.509147 containerd[1503]: time="2025-01-30T18:30:10.509095606Z" level=info msg="StartContainer for \"1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d\" returns successfully" Jan 30 18:30:10.539982 systemd[1]: cri-containerd-1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d.scope: Deactivated successfully. Jan 30 18:30:10.580477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d-rootfs.mount: Deactivated successfully. 
Jan 30 18:30:10.630762 containerd[1503]: time="2025-01-30T18:30:10.593759698Z" level=info msg="shim disconnected" id=1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d namespace=k8s.io Jan 30 18:30:10.631062 containerd[1503]: time="2025-01-30T18:30:10.630773662Z" level=warning msg="cleaning up after shim disconnected" id=1052a105c32161b8858456a4acd89555e8d73860d3a543fde409927eef41849d namespace=k8s.io Jan 30 18:30:10.631062 containerd[1503]: time="2025-01-30T18:30:10.630797497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 18:30:11.232716 containerd[1503]: time="2025-01-30T18:30:11.232642266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 18:30:11.255941 kubelet[2657]: I0130 18:30:11.255804 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d87bbbf8-f96tl" podStartSLOduration=3.800586456 podStartE2EDuration="6.255762082s" podCreationTimestamp="2025-01-30 18:30:05 +0000 UTC" firstStartedPulling="2025-01-30 18:30:06.343702316 +0000 UTC m=+14.350458510" lastFinishedPulling="2025-01-30 18:30:08.798877944 +0000 UTC m=+16.805634136" observedRunningTime="2025-01-30 18:30:09.243183749 +0000 UTC m=+17.249939966" watchObservedRunningTime="2025-01-30 18:30:11.255762082 +0000 UTC m=+19.262518395" Jan 30 18:30:12.121880 kubelet[2657]: E0130 18:30:12.121465 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:14.120750 kubelet[2657]: E0130 18:30:14.120655 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:16.122721 kubelet[2657]: E0130 18:30:16.121872 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:16.652999 containerd[1503]: time="2025-01-30T18:30:16.652901932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:16.654831 containerd[1503]: time="2025-01-30T18:30:16.654247581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 18:30:16.655457 containerd[1503]: time="2025-01-30T18:30:16.655404576Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:16.658642 containerd[1503]: time="2025-01-30T18:30:16.658591878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:16.660120 containerd[1503]: time="2025-01-30T18:30:16.660087666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" 
with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.427369213s" Jan 30 18:30:16.660185 containerd[1503]: time="2025-01-30T18:30:16.660125002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 18:30:16.664008 containerd[1503]: time="2025-01-30T18:30:16.663977479Z" level=info msg="CreateContainer within sandbox \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 18:30:16.687208 containerd[1503]: time="2025-01-30T18:30:16.687083753Z" level=info msg="CreateContainer within sandbox \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7\"" Jan 30 18:30:16.688216 containerd[1503]: time="2025-01-30T18:30:16.687832956Z" level=info msg="StartContainer for \"1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7\"" Jan 30 18:30:16.750874 systemd[1]: Started cri-containerd-1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7.scope - libcontainer container 1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7. Jan 30 18:30:16.781257 containerd[1503]: time="2025-01-30T18:30:16.781196877Z" level=info msg="StartContainer for \"1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7\" returns successfully" Jan 30 18:30:17.356821 systemd[1]: cri-containerd-1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7.scope: Deactivated successfully. Jan 30 18:30:17.391643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7-rootfs.mount: Deactivated successfully. 
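The "Observed pod startup duration" entry above (for calico-typha) shows how kubelet's pod_startup_latency_tracker derives its figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A small Go check against the logged timestamps (the layout string and variable names are mine; the few-nanosecond mismatch against the logged 3.800586456 is expected, since kubelet mixes monotonic clock readings):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST" // Go accepts extra fractional seconds when parsing
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Timestamps copied from the calico-typha entry above.
        created := parse("2025-01-30 18:30:05 +0000 UTC")
        firstPull := parse("2025-01-30 18:30:06.343702316 +0000 UTC")
        lastPull := parse("2025-01-30 18:30:08.798877944 +0000 UTC")
        running := parse("2025-01-30 18:30:11.255762082 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 6.255762082s
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded: ~3.800586454s
        fmt.Println(e2e, slo)
    }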
Jan 30 18:30:17.456431 kubelet[2657]: I0130 18:30:17.456333 2657 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 18:30:17.462965 containerd[1503]: time="2025-01-30T18:30:17.462574749Z" level=info msg="shim disconnected" id=1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7 namespace=k8s.io Jan 30 18:30:17.462965 containerd[1503]: time="2025-01-30T18:30:17.462659362Z" level=warning msg="cleaning up after shim disconnected" id=1c3dac774d7de67e6b294ba820b1dc6b59ce9e90ffbd54a6354245195c87f9f7 namespace=k8s.io Jan 30 18:30:17.462965 containerd[1503]: time="2025-01-30T18:30:17.462671263Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 18:30:17.509537 kubelet[2657]: W0130 18:30:17.509385 2657 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-eex0h.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-eex0h.gb1.brightbox.com' and this object Jan 30 18:30:17.509537 kubelet[2657]: E0130 18:30:17.509441 2657 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-eex0h.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-eex0h.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 30 18:30:17.516652 systemd[1]: Created slice kubepods-burstable-podbb725008_8b91_4bc2_9fc5_057d3a965b20.slice - libcontainer container kubepods-burstable-podbb725008_8b91_4bc2_9fc5_057d3a965b20.slice. Jan 30 18:30:17.534873 systemd[1]: Created slice kubepods-burstable-pod16dcc974_7fc7_4198_87ca_318a9eb2503b.slice - libcontainer container kubepods-burstable-pod16dcc974_7fc7_4198_87ca_318a9eb2503b.slice. Jan 30 18:30:17.545725 systemd[1]: Created slice kubepods-besteffort-pod276afab4_5fe5_4590_b5dc_be8776d1a75c.slice - libcontainer container kubepods-besteffort-pod276afab4_5fe5_4590_b5dc_be8776d1a75c.slice. Jan 30 18:30:17.556829 systemd[1]: Created slice kubepods-besteffort-pod5d4f1662_4fa4_4882_ad86_86d80d22e48c.slice - libcontainer container kubepods-besteffort-pod5d4f1662_4fa4_4882_ad86_86d80d22e48c.slice. Jan 30 18:30:17.564819 systemd[1]: Created slice kubepods-besteffort-pod5f4842b6_cff0_4aec_9953_449edcf4eb42.slice - libcontainer container kubepods-besteffort-pod5f4842b6_cff0_4aec_9953_449edcf4eb42.slice. 
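Two things happen at once here: the node has just gone Ready, so the scheduler binds the pending CoreDNS, Calico apiserver, and kube-controllers pods to it, and the ConfigMap watch error is most likely the node authorizer racing that binding (kubelet may only read ConfigMaps referenced by pods already bound to the node, so the error clears once the binding is recorded). The "Created slice" entries also expose the systemd cgroup naming rule: kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID rewritten as underscores because systemd reserves '-' for slice hierarchy. A minimal sketch of that mapping, checked against the UIDs above (the helper function is mine):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming visible in the "Created slice" entries:
    // kubepods-<qosClass>-pod<uid>.slice, with '-' in the UID escaped as '_'
    // because systemd encodes slice hierarchy with '-'.
    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "bb725008-8b91-4bc2-9fc5-057d3a965b20"))
        fmt.Println(sliceName("besteffort", "5f4842b6-cff0-4aec-9953-449edcf4eb42"))
    }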
Jan 30 18:30:17.594365 kubelet[2657]: I0130 18:30:17.594233 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7sbk\" (UniqueName: \"kubernetes.io/projected/bb725008-8b91-4bc2-9fc5-057d3a965b20-kube-api-access-d7sbk\") pod \"coredns-6f6b679f8f-np6kd\" (UID: \"bb725008-8b91-4bc2-9fc5-057d3a965b20\") " pod="kube-system/coredns-6f6b679f8f-np6kd" Jan 30 18:30:17.594365 kubelet[2657]: I0130 18:30:17.594293 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf8rq\" (UniqueName: \"kubernetes.io/projected/5d4f1662-4fa4-4882-ad86-86d80d22e48c-kube-api-access-xf8rq\") pod \"calico-apiserver-65c57d4b87-mkzvs\" (UID: \"5d4f1662-4fa4-4882-ad86-86d80d22e48c\") " pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" Jan 30 18:30:17.595278 kubelet[2657]: I0130 18:30:17.594319 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb725008-8b91-4bc2-9fc5-057d3a965b20-config-volume\") pod \"coredns-6f6b679f8f-np6kd\" (UID: \"bb725008-8b91-4bc2-9fc5-057d3a965b20\") " pod="kube-system/coredns-6f6b679f8f-np6kd" Jan 30 18:30:17.595441 kubelet[2657]: I0130 18:30:17.595374 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/276afab4-5fe5-4590-b5dc-be8776d1a75c-calico-apiserver-certs\") pod \"calico-apiserver-65c57d4b87-ttjhg\" (UID: \"276afab4-5fe5-4590-b5dc-be8776d1a75c\") " pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" Jan 30 18:30:17.595441 kubelet[2657]: I0130 18:30:17.595424 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16dcc974-7fc7-4198-87ca-318a9eb2503b-config-volume\") pod \"coredns-6f6b679f8f-g2vfs\" (UID: \"16dcc974-7fc7-4198-87ca-318a9eb2503b\") " pod="kube-system/coredns-6f6b679f8f-g2vfs" Jan 30 18:30:17.595537 kubelet[2657]: I0130 18:30:17.595447 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d4f1662-4fa4-4882-ad86-86d80d22e48c-calico-apiserver-certs\") pod \"calico-apiserver-65c57d4b87-mkzvs\" (UID: \"5d4f1662-4fa4-4882-ad86-86d80d22e48c\") " pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" Jan 30 18:30:17.595537 kubelet[2657]: I0130 18:30:17.595472 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhx75\" (UniqueName: \"kubernetes.io/projected/5f4842b6-cff0-4aec-9953-449edcf4eb42-kube-api-access-qhx75\") pod \"calico-kube-controllers-567c559546-mqg9t\" (UID: \"5f4842b6-cff0-4aec-9953-449edcf4eb42\") " pod="calico-system/calico-kube-controllers-567c559546-mqg9t" Jan 30 18:30:17.595606 kubelet[2657]: I0130 18:30:17.595510 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x857\" (UniqueName: \"kubernetes.io/projected/276afab4-5fe5-4590-b5dc-be8776d1a75c-kube-api-access-2x857\") pod \"calico-apiserver-65c57d4b87-ttjhg\" (UID: \"276afab4-5fe5-4590-b5dc-be8776d1a75c\") " pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" Jan 30 18:30:17.595606 kubelet[2657]: I0130 18:30:17.595560 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f4842b6-cff0-4aec-9953-449edcf4eb42-tigera-ca-bundle\") pod \"calico-kube-controllers-567c559546-mqg9t\" (UID: \"5f4842b6-cff0-4aec-9953-449edcf4eb42\") " pod="calico-system/calico-kube-controllers-567c559546-mqg9t" Jan 30 18:30:17.596008 kubelet[2657]: I0130 18:30:17.595581 2657 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tmh\" (UniqueName: \"kubernetes.io/projected/16dcc974-7fc7-4198-87ca-318a9eb2503b-kube-api-access-b4tmh\") pod \"coredns-6f6b679f8f-g2vfs\" (UID: \"16dcc974-7fc7-4198-87ca-318a9eb2503b\") " pod="kube-system/coredns-6f6b679f8f-g2vfs" Jan 30 18:30:17.854397 containerd[1503]: time="2025-01-30T18:30:17.854238612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-ttjhg,Uid:276afab4-5fe5-4590-b5dc-be8776d1a75c,Namespace:calico-apiserver,Attempt:0,}" Jan 30 18:30:17.865867 containerd[1503]: time="2025-01-30T18:30:17.865286928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-mkzvs,Uid:5d4f1662-4fa4-4882-ad86-86d80d22e48c,Namespace:calico-apiserver,Attempt:0,}" Jan 30 18:30:17.885982 containerd[1503]: time="2025-01-30T18:30:17.885637304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567c559546-mqg9t,Uid:5f4842b6-cff0-4aec-9953-449edcf4eb42,Namespace:calico-system,Attempt:0,}" Jan 30 18:30:18.120242 containerd[1503]: time="2025-01-30T18:30:18.119744070Z" level=error msg="Failed to destroy network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.120701 containerd[1503]: time="2025-01-30T18:30:18.120492536Z" level=error msg="Failed to destroy network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.131225 containerd[1503]: time="2025-01-30T18:30:18.125897113Z" level=error msg="encountered an error cleaning up failed sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.131225 containerd[1503]: time="2025-01-30T18:30:18.125967933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-ttjhg,Uid:276afab4-5fe5-4590-b5dc-be8776d1a75c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.131225 containerd[1503]: time="2025-01-30T18:30:18.126322082Z" level=error msg="encountered an error cleaning up failed sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.131225 containerd[1503]: time="2025-01-30T18:30:18.126367666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567c559546-mqg9t,Uid:5f4842b6-cff0-4aec-9953-449edcf4eb42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.131034 systemd[1]: Created slice kubepods-besteffort-pod24904d3d_1841_497d_aeec_b53f05fc7a48.slice - libcontainer container kubepods-besteffort-pod24904d3d_1841_497d_aeec_b53f05fc7a48.slice. Jan 30 18:30:18.133495 kubelet[2657]: E0130 18:30:18.127268 2657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.133495 kubelet[2657]: E0130 18:30:18.127330 2657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" Jan 30 18:30:18.133495 kubelet[2657]: E0130 18:30:18.127363 2657 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" Jan 30 18:30:18.133919 kubelet[2657]: E0130 18:30:18.127413 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65c57d4b87-ttjhg_calico-apiserver(276afab4-5fe5-4590-b5dc-be8776d1a75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65c57d4b87-ttjhg_calico-apiserver(276afab4-5fe5-4590-b5dc-be8776d1a75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" podUID="276afab4-5fe5-4590-b5dc-be8776d1a75c" Jan 30 18:30:18.135086 kubelet[2657]: E0130 18:30:18.134806 2657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.135086 kubelet[2657]: E0130 18:30:18.134858 2657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567c559546-mqg9t" Jan 30 18:30:18.135086 kubelet[2657]: E0130 18:30:18.134877 2657 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567c559546-mqg9t" Jan 30 18:30:18.135227 containerd[1503]: time="2025-01-30T18:30:18.134944815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzj4j,Uid:24904d3d-1841-497d-aeec-b53f05fc7a48,Namespace:calico-system,Attempt:0,}" Jan 30 18:30:18.135268 kubelet[2657]: E0130 18:30:18.134917 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-567c559546-mqg9t_calico-system(5f4842b6-cff0-4aec-9953-449edcf4eb42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-567c559546-mqg9t_calico-system(5f4842b6-cff0-4aec-9953-449edcf4eb42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567c559546-mqg9t" podUID="5f4842b6-cff0-4aec-9953-449edcf4eb42" Jan 30 18:30:18.138095 containerd[1503]: time="2025-01-30T18:30:18.138066247Z" level=error msg="Failed to destroy network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.138765 containerd[1503]: time="2025-01-30T18:30:18.138736866Z" level=error msg="encountered an error cleaning up failed sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.151129 containerd[1503]: time="2025-01-30T18:30:18.151087583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-mkzvs,Uid:5d4f1662-4fa4-4882-ad86-86d80d22e48c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 30 18:30:18.151589 kubelet[2657]: E0130 18:30:18.151537 2657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.151714 kubelet[2657]: E0130 18:30:18.151625 2657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" Jan 30 18:30:18.151940 kubelet[2657]: E0130 18:30:18.151666 2657 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" Jan 30 18:30:18.151940 kubelet[2657]: E0130 18:30:18.151791 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65c57d4b87-mkzvs_calico-apiserver(5d4f1662-4fa4-4882-ad86-86d80d22e48c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65c57d4b87-mkzvs_calico-apiserver(5d4f1662-4fa4-4882-ad86-86d80d22e48c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" podUID="5d4f1662-4fa4-4882-ad86-86d80d22e48c" Jan 30 18:30:18.218671 containerd[1503]: time="2025-01-30T18:30:18.218620439Z" level=error msg="Failed to destroy network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.219308 containerd[1503]: time="2025-01-30T18:30:18.219272727Z" level=error msg="encountered an error cleaning up failed sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.219437 containerd[1503]: time="2025-01-30T18:30:18.219416506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzj4j,Uid:24904d3d-1841-497d-aeec-b53f05fc7a48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.220122 kubelet[2657]: E0130 18:30:18.219744 2657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.220122 kubelet[2657]: E0130 18:30:18.219807 2657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:18.220122 kubelet[2657]: E0130 18:30:18.219829 2657 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zzj4j" Jan 30 18:30:18.220278 kubelet[2657]: E0130 18:30:18.219882 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zzj4j_calico-system(24904d3d-1841-497d-aeec-b53f05fc7a48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zzj4j_calico-system(24904d3d-1841-497d-aeec-b53f05fc7a48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:18.290137 kubelet[2657]: I0130 18:30:18.289924 2657 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:18.303339 containerd[1503]: time="2025-01-30T18:30:18.301113977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 18:30:18.310241 containerd[1503]: time="2025-01-30T18:30:18.309892236Z" level=info msg="StopPodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\"" Jan 30 18:30:18.311703 containerd[1503]: time="2025-01-30T18:30:18.311643749Z" level=info msg="Ensure that sandbox 30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34 in task-service has been cleanup successfully" Jan 30 18:30:18.353829 kubelet[2657]: I0130 18:30:18.353226 2657 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:18.356641 containerd[1503]: time="2025-01-30T18:30:18.356596789Z" level=info msg="StopPodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\"" Jan 30 
18:30:18.359158 containerd[1503]: time="2025-01-30T18:30:18.359112108Z" level=info msg="Ensure that sandbox a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4 in task-service has been cleanup successfully" Jan 30 18:30:18.366940 kubelet[2657]: I0130 18:30:18.366570 2657 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:18.379408 containerd[1503]: time="2025-01-30T18:30:18.376748510Z" level=info msg="StopPodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\"" Jan 30 18:30:18.379408 containerd[1503]: time="2025-01-30T18:30:18.377373957Z" level=info msg="Ensure that sandbox 3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a in task-service has been cleanup successfully" Jan 30 18:30:18.389873 kubelet[2657]: I0130 18:30:18.388484 2657 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:18.390315 containerd[1503]: time="2025-01-30T18:30:18.389113563Z" level=info msg="StopPodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\"" Jan 30 18:30:18.390315 containerd[1503]: time="2025-01-30T18:30:18.389356971Z" level=info msg="Ensure that sandbox 3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3 in task-service has been cleanup successfully" Jan 30 18:30:18.446782 containerd[1503]: time="2025-01-30T18:30:18.446524594Z" level=error msg="StopPodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" failed" error="failed to destroy network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.447524 kubelet[2657]: E0130 18:30:18.447474 2657 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:18.447645 kubelet[2657]: E0130 18:30:18.447572 2657 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a"} Jan 30 18:30:18.447705 kubelet[2657]: E0130 18:30:18.447664 2657 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f4842b6-cff0-4aec-9953-449edcf4eb42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:30:18.447876 kubelet[2657]: E0130 18:30:18.447846 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f4842b6-cff0-4aec-9953-449edcf4eb42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567c559546-mqg9t" podUID="5f4842b6-cff0-4aec-9953-449edcf4eb42" Jan 30 18:30:18.458348 containerd[1503]: time="2025-01-30T18:30:18.458289819Z" level=error msg="StopPodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" failed" error="failed to destroy network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.458851 kubelet[2657]: E0130 18:30:18.458584 2657 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:18.458851 kubelet[2657]: E0130 18:30:18.458644 2657 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34"} Jan 30 18:30:18.458851 kubelet[2657]: E0130 18:30:18.458699 2657 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"276afab4-5fe5-4590-b5dc-be8776d1a75c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:30:18.458851 kubelet[2657]: E0130 18:30:18.458729 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"276afab4-5fe5-4590-b5dc-be8776d1a75c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" podUID="276afab4-5fe5-4590-b5dc-be8776d1a75c" Jan 30 18:30:18.463983 containerd[1503]: time="2025-01-30T18:30:18.463818555Z" level=error msg="StopPodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" failed" error="failed to destroy network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.464296 kubelet[2657]: E0130 18:30:18.464114 2657 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:18.464296 kubelet[2657]: E0130 18:30:18.464159 2657 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4"} Jan 30 18:30:18.464296 kubelet[2657]: E0130 18:30:18.464188 2657 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24904d3d-1841-497d-aeec-b53f05fc7a48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:30:18.464296 kubelet[2657]: E0130 18:30:18.464209 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24904d3d-1841-497d-aeec-b53f05fc7a48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zzj4j" podUID="24904d3d-1841-497d-aeec-b53f05fc7a48" Jan 30 18:30:18.469639 containerd[1503]: time="2025-01-30T18:30:18.469580081Z" level=error msg="StopPodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" failed" error="failed to destroy network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.470104 kubelet[2657]: E0130 18:30:18.469817 2657 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:18.470104 kubelet[2657]: E0130 18:30:18.469871 2657 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3"} Jan 30 18:30:18.470104 kubelet[2657]: E0130 18:30:18.469914 2657 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d4f1662-4fa4-4882-ad86-86d80d22e48c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 30 18:30:18.470104 kubelet[2657]: E0130 18:30:18.469957 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d4f1662-4fa4-4882-ad86-86d80d22e48c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" podUID="5d4f1662-4fa4-4882-ad86-86d80d22e48c" Jan 30 18:30:18.733666 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3-shm.mount: Deactivated successfully. Jan 30 18:30:18.735940 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34-shm.mount: Deactivated successfully. Jan 30 18:30:18.742355 containerd[1503]: time="2025-01-30T18:30:18.742318431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-np6kd,Uid:bb725008-8b91-4bc2-9fc5-057d3a965b20,Namespace:kube-system,Attempt:0,}" Jan 30 18:30:18.744302 containerd[1503]: time="2025-01-30T18:30:18.744046267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g2vfs,Uid:16dcc974-7fc7-4198-87ca-318a9eb2503b,Namespace:kube-system,Attempt:0,}" Jan 30 18:30:18.838761 containerd[1503]: time="2025-01-30T18:30:18.838633217Z" level=error msg="Failed to destroy network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.840118 containerd[1503]: time="2025-01-30T18:30:18.839924731Z" level=error msg="encountered an error cleaning up failed sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.840118 containerd[1503]: time="2025-01-30T18:30:18.839988027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-np6kd,Uid:bb725008-8b91-4bc2-9fc5-057d3a965b20,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.841814 kubelet[2657]: E0130 18:30:18.840233 2657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.841814 kubelet[2657]: E0130 18:30:18.840294 2657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-np6kd" Jan 30 18:30:18.841814 kubelet[2657]: E0130 18:30:18.840315 2657 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-np6kd" Jan 30 18:30:18.841963 kubelet[2657]: E0130 18:30:18.840368 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-np6kd_kube-system(bb725008-8b91-4bc2-9fc5-057d3a965b20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-np6kd_kube-system(bb725008-8b91-4bc2-9fc5-057d3a965b20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-np6kd" podUID="bb725008-8b91-4bc2-9fc5-057d3a965b20" Jan 30 18:30:18.850010 containerd[1503]: time="2025-01-30T18:30:18.849962343Z" level=error msg="Failed to destroy network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.850542 containerd[1503]: time="2025-01-30T18:30:18.850508013Z" level=error msg="encountered an error cleaning up failed sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.850616 containerd[1503]: time="2025-01-30T18:30:18.850588771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g2vfs,Uid:16dcc974-7fc7-4198-87ca-318a9eb2503b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.851263 kubelet[2657]: E0130 18:30:18.850819 2657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:18.851263 kubelet[2657]: E0130 18:30:18.850874 2657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-g2vfs" Jan 30 18:30:18.851263 kubelet[2657]: E0130 18:30:18.850913 2657 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-g2vfs" Jan 30 18:30:18.851391 kubelet[2657]: E0130 18:30:18.850973 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-g2vfs_kube-system(16dcc974-7fc7-4198-87ca-318a9eb2503b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-g2vfs_kube-system(16dcc974-7fc7-4198-87ca-318a9eb2503b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-g2vfs" podUID="16dcc974-7fc7-4198-87ca-318a9eb2503b" Jan 30 18:30:19.394949 kubelet[2657]: I0130 18:30:19.393928 2657 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:19.396259 containerd[1503]: time="2025-01-30T18:30:19.395443284Z" level=info msg="StopPodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\"" Jan 30 18:30:19.396259 containerd[1503]: time="2025-01-30T18:30:19.395842379Z" level=info msg="Ensure that sandbox 7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a in task-service has been cleanup successfully" Jan 30 18:30:19.405646 kubelet[2657]: I0130 18:30:19.404122 2657 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:19.406126 containerd[1503]: time="2025-01-30T18:30:19.405858584Z" level=info msg="StopPodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\"" Jan 30 18:30:19.406126 containerd[1503]: time="2025-01-30T18:30:19.406083748Z" level=info msg="Ensure that sandbox 0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779 in task-service has been cleanup successfully" Jan 30 18:30:19.445903 containerd[1503]: time="2025-01-30T18:30:19.445850628Z" level=error msg="StopPodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" failed" error="failed to destroy network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:19.446430 kubelet[2657]: E0130 18:30:19.446377 2657 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:19.446508 kubelet[2657]: E0130 18:30:19.446442 2657 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779"} Jan 30 18:30:19.446508 kubelet[2657]: E0130 18:30:19.446483 2657 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb725008-8b91-4bc2-9fc5-057d3a965b20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:30:19.446808 kubelet[2657]: E0130 18:30:19.446508 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb725008-8b91-4bc2-9fc5-057d3a965b20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-np6kd" podUID="bb725008-8b91-4bc2-9fc5-057d3a965b20" Jan 30 18:30:19.447508 containerd[1503]: time="2025-01-30T18:30:19.447471569Z" level=error msg="StopPodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" failed" error="failed to destroy network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:30:19.447815 kubelet[2657]: E0130 18:30:19.447677 2657 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:19.447815 kubelet[2657]: E0130 18:30:19.447741 2657 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a"} Jan 30 18:30:19.447815 kubelet[2657]: E0130 18:30:19.447767 2657 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16dcc974-7fc7-4198-87ca-318a9eb2503b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jan 30 18:30:19.447815 kubelet[2657]: E0130 18:30:19.447791 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16dcc974-7fc7-4198-87ca-318a9eb2503b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-g2vfs" podUID="16dcc974-7fc7-4198-87ca-318a9eb2503b" Jan 30 18:30:19.722455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a-shm.mount: Deactivated successfully. Jan 30 18:30:19.722590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779-shm.mount: Deactivated successfully. Jan 30 18:30:25.912386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974847882.mount: Deactivated successfully. Jan 30 18:30:26.066058 containerd[1503]: time="2025-01-30T18:30:26.065186182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:26.069440 containerd[1503]: time="2025-01-30T18:30:26.053426897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 18:30:26.080602 containerd[1503]: time="2025-01-30T18:30:26.080570251Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:26.081764 containerd[1503]: time="2025-01-30T18:30:26.081741339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:26.084421 containerd[1503]: time="2025-01-30T18:30:26.084366755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.780135363s" Jan 30 18:30:26.084503 containerd[1503]: time="2025-01-30T18:30:26.084419460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 18:30:26.173476 containerd[1503]: time="2025-01-30T18:30:26.173187545Z" level=info msg="CreateContainer within sandbox \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 18:30:26.302567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017744727.mount: Deactivated successfully. 
Jan 30 18:30:26.317290 containerd[1503]: time="2025-01-30T18:30:26.317143837Z" level=info msg="CreateContainer within sandbox \"0b01ad16b8c0c89016e8c45b6545bd62e2e72b4e02bc29f978e9b6740f85cbc1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67\"" Jan 30 18:30:26.323269 containerd[1503]: time="2025-01-30T18:30:26.323210016Z" level=info msg="StartContainer for \"890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67\"" Jan 30 18:30:26.459840 systemd[1]: Started cri-containerd-890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67.scope - libcontainer container 890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67. Jan 30 18:30:26.507917 containerd[1503]: time="2025-01-30T18:30:26.507118941Z" level=info msg="StartContainer for \"890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67\" returns successfully" Jan 30 18:30:26.632928 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 18:30:26.634202 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 30 18:30:27.552623 kubelet[2657]: I0130 18:30:27.545550 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-59cd8" podStartSLOduration=3.097460508 podStartE2EDuration="22.50862261s" podCreationTimestamp="2025-01-30 18:30:05 +0000 UTC" firstStartedPulling="2025-01-30 18:30:06.674021963 +0000 UTC m=+14.680778160" lastFinishedPulling="2025-01-30 18:30:26.085184068 +0000 UTC m=+34.091940262" observedRunningTime="2025-01-30 18:30:27.501314381 +0000 UTC m=+35.508070599" watchObservedRunningTime="2025-01-30 18:30:27.50862261 +0000 UTC m=+35.515378851" Jan 30 18:30:27.570159 systemd[1]: run-containerd-runc-k8s.io-890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67-runc.WwJY3B.mount: Deactivated successfully. Jan 30 18:30:28.478326 systemd[1]: run-containerd-runc-k8s.io-890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67-runc.7gIi6o.mount: Deactivated successfully. Jan 30 18:30:29.486033 systemd[1]: run-containerd-runc-k8s.io-890caac6e5f356a4246366f175e47036cc76aba350df969000d332ef2d505b67-runc.0wllTO.mount: Deactivated successfully. Jan 30 18:30:30.124717 containerd[1503]: time="2025-01-30T18:30:30.122969014Z" level=info msg="StopPodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\"" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.209 [INFO][3969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.209 [INFO][3969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" iface="eth0" netns="/var/run/netns/cni-2a926696-ebb4-cc1b-e9ff-539499c62b2c" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.209 [INFO][3969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" iface="eth0" netns="/var/run/netns/cni-2a926696-ebb4-cc1b-e9ff-539499c62b2c" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.211 [INFO][3969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" iface="eth0" netns="/var/run/netns/cni-2a926696-ebb4-cc1b-e9ff-539499c62b2c" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.211 [INFO][3969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.211 [INFO][3969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.312 [INFO][3976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.313 [INFO][3976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.313 [INFO][3976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.323 [WARNING][3976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.323 [INFO][3976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.327 [INFO][3976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:30.334864 containerd[1503]: 2025-01-30 18:30:30.330 [INFO][3969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:30.341431 systemd[1]: run-netns-cni\x2d2a926696\x2debb4\x2dcc1b\x2de9ff\x2d539499c62b2c.mount: Deactivated successfully. 
Jan 30 18:30:30.348805 containerd[1503]: time="2025-01-30T18:30:30.348648358Z" level=info msg="TearDown network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" successfully" Jan 30 18:30:30.348805 containerd[1503]: time="2025-01-30T18:30:30.348799881Z" level=info msg="StopPodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" returns successfully" Jan 30 18:30:30.349896 containerd[1503]: time="2025-01-30T18:30:30.349869313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-ttjhg,Uid:276afab4-5fe5-4590-b5dc-be8776d1a75c,Namespace:calico-apiserver,Attempt:1,}" Jan 30 18:30:30.613429 systemd-networkd[1424]: cali22be1702655: Link UP Jan 30 18:30:30.613671 systemd-networkd[1424]: cali22be1702655: Gained carrier Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.424 [INFO][3983] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.445 [INFO][3983] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0 calico-apiserver-65c57d4b87- calico-apiserver 276afab4-5fe5-4590-b5dc-be8776d1a75c 730 0 2025-01-30 18:30:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65c57d4b87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-eex0h.gb1.brightbox.com calico-apiserver-65c57d4b87-ttjhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22be1702655 [] []}} ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.445 [INFO][3983] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.520 [INFO][4004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" HandleID="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.536 [INFO][4004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" HandleID="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-eex0h.gb1.brightbox.com", "pod":"calico-apiserver-65c57d4b87-ttjhg", "timestamp":"2025-01-30 18:30:30.520771417 +0000 UTC"}, Hostname:"srv-eex0h.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.536 [INFO][4004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.536 [INFO][4004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.538 [INFO][4004] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-eex0h.gb1.brightbox.com' Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.544 [INFO][4004] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.550 [INFO][4004] ipam/ipam.go 372: Looking up existing affinities for host host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.559 [INFO][4004] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.562 [INFO][4004] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.565 [INFO][4004] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.566 [INFO][4004] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.568 [INFO][4004] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6 Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.577 [INFO][4004] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.583 [INFO][4004] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.1/26] block=192.168.69.0/26 handle="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.583 [INFO][4004] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.1/26] handle="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.583 [INFO][4004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 18:30:30.642778 containerd[1503]: 2025-01-30 18:30:30.583 [INFO][4004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.1/26] IPv6=[] ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" HandleID="k8s-pod-network.8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.644428 containerd[1503]: 2025-01-30 18:30:30.588 [INFO][3983] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"276afab4-5fe5-4590-b5dc-be8776d1a75c", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-65c57d4b87-ttjhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22be1702655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:30.644428 containerd[1503]: 2025-01-30 18:30:30.589 [INFO][3983] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.1/32] ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.644428 containerd[1503]: 2025-01-30 18:30:30.589 [INFO][3983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22be1702655 ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.644428 containerd[1503]: 2025-01-30 18:30:30.612 [INFO][3983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.644428 containerd[1503]: 2025-01-30 18:30:30.615 [INFO][3983] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"276afab4-5fe5-4590-b5dc-be8776d1a75c", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6", Pod:"calico-apiserver-65c57d4b87-ttjhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22be1702655", MAC:"2e:5a:d2:aa:7a:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:30.644428 containerd[1503]: 2025-01-30 18:30:30.634 [INFO][3983] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-ttjhg" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:30.678862 containerd[1503]: time="2025-01-30T18:30:30.678616835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:30.678862 containerd[1503]: time="2025-01-30T18:30:30.678720593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:30.678862 containerd[1503]: time="2025-01-30T18:30:30.678797792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:30.679626 containerd[1503]: time="2025-01-30T18:30:30.679557594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:30.701706 systemd[1]: run-containerd-runc-k8s.io-8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6-runc.8S9Iel.mount: Deactivated successfully. Jan 30 18:30:30.708838 systemd[1]: Started cri-containerd-8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6.scope - libcontainer container 8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6. 
Jan 30 18:30:30.758421 containerd[1503]: time="2025-01-30T18:30:30.758373614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-ttjhg,Uid:276afab4-5fe5-4590-b5dc-be8776d1a75c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6\"" Jan 30 18:30:30.760057 containerd[1503]: time="2025-01-30T18:30:30.760032665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 18:30:31.122290 containerd[1503]: time="2025-01-30T18:30:31.122212688Z" level=info msg="StopPodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\"" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.188 [INFO][4087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.188 [INFO][4087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" iface="eth0" netns="/var/run/netns/cni-84651d23-2be2-4c53-9ff7-42710783fdc9" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.188 [INFO][4087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" iface="eth0" netns="/var/run/netns/cni-84651d23-2be2-4c53-9ff7-42710783fdc9" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.189 [INFO][4087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" iface="eth0" netns="/var/run/netns/cni-84651d23-2be2-4c53-9ff7-42710783fdc9" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.189 [INFO][4087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.189 [INFO][4087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.215 [INFO][4094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.215 [INFO][4094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.215 [INFO][4094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.221 [WARNING][4094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.221 [INFO][4094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.223 [INFO][4094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:31.228719 containerd[1503]: 2025-01-30 18:30:31.225 [INFO][4087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:31.229578 systemd[1]: run-netns-cni\x2d84651d23\x2d2be2\x2d4c53\x2d9ff7\x2d42710783fdc9.mount: Deactivated successfully. Jan 30 18:30:31.230966 containerd[1503]: time="2025-01-30T18:30:31.229731272Z" level=info msg="TearDown network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" successfully" Jan 30 18:30:31.230966 containerd[1503]: time="2025-01-30T18:30:31.229760719Z" level=info msg="StopPodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" returns successfully" Jan 30 18:30:31.234714 containerd[1503]: time="2025-01-30T18:30:31.234607754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-mkzvs,Uid:5d4f1662-4fa4-4882-ad86-86d80d22e48c,Namespace:calico-apiserver,Attempt:1,}" Jan 30 18:30:31.398148 systemd-networkd[1424]: calif599f1b5e86: Link UP Jan 30 18:30:31.399318 systemd-networkd[1424]: calif599f1b5e86: Gained carrier Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.267 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.283 [INFO][4101] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0 calico-apiserver-65c57d4b87- calico-apiserver 5d4f1662-4fa4-4882-ad86-86d80d22e48c 739 0 2025-01-30 18:30:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65c57d4b87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-eex0h.gb1.brightbox.com calico-apiserver-65c57d4b87-mkzvs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif599f1b5e86 [] []}} ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.283 [INFO][4101] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.415334 
containerd[1503]: 2025-01-30 18:30:31.340 [INFO][4111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" HandleID="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.351 [INFO][4111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" HandleID="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-eex0h.gb1.brightbox.com", "pod":"calico-apiserver-65c57d4b87-mkzvs", "timestamp":"2025-01-30 18:30:31.340617611 +0000 UTC"}, Hostname:"srv-eex0h.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.351 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.351 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.351 [INFO][4111] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-eex0h.gb1.brightbox.com' Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.355 [INFO][4111] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.362 [INFO][4111] ipam/ipam.go 372: Looking up existing affinities for host host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.368 [INFO][4111] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.371 [INFO][4111] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.374 [INFO][4111] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.374 [INFO][4111] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.376 [INFO][4111] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.381 [INFO][4111] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.390 [INFO][4111] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.69.2/26] block=192.168.69.0/26 handle="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.390 [INFO][4111] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.2/26] handle="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.390 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:31.415334 containerd[1503]: 2025-01-30 18:30:31.390 [INFO][4111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.2/26] IPv6=[] ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" HandleID="k8s-pod-network.982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.416329 containerd[1503]: 2025-01-30 18:30:31.394 [INFO][4101] cni-plugin/k8s.go 386: Populated endpoint ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d4f1662-4fa4-4882-ad86-86d80d22e48c", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-65c57d4b87-mkzvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif599f1b5e86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:31.416329 containerd[1503]: 2025-01-30 18:30:31.394 [INFO][4101] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.2/32] ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.416329 containerd[1503]: 2025-01-30 18:30:31.394 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif599f1b5e86 ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" 
Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.416329 containerd[1503]: 2025-01-30 18:30:31.397 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.416329 containerd[1503]: 2025-01-30 18:30:31.398 [INFO][4101] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d4f1662-4fa4-4882-ad86-86d80d22e48c", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b", Pod:"calico-apiserver-65c57d4b87-mkzvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif599f1b5e86", MAC:"4e:7e:10:5a:36:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:31.416329 containerd[1503]: 2025-01-30 18:30:31.411 [INFO][4101] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-65c57d4b87-mkzvs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:31.442308 containerd[1503]: time="2025-01-30T18:30:31.441994929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:31.442308 containerd[1503]: time="2025-01-30T18:30:31.442072732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:31.442308 containerd[1503]: time="2025-01-30T18:30:31.442087701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:31.442308 containerd[1503]: time="2025-01-30T18:30:31.442167931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:31.477009 systemd[1]: Started cri-containerd-982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b.scope - libcontainer container 982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b. Jan 30 18:30:31.546803 containerd[1503]: time="2025-01-30T18:30:31.546667277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65c57d4b87-mkzvs,Uid:5d4f1662-4fa4-4882-ad86-86d80d22e48c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b\"" Jan 30 18:30:32.137767 containerd[1503]: time="2025-01-30T18:30:32.137294361Z" level=info msg="StopPodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\"" Jan 30 18:30:32.137767 containerd[1503]: time="2025-01-30T18:30:32.137382338Z" level=info msg="StopPodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\"" Jan 30 18:30:32.140957 containerd[1503]: time="2025-01-30T18:30:32.140491398Z" level=info msg="StopPodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\"" Jan 30 18:30:32.141360 containerd[1503]: time="2025-01-30T18:30:32.141327941Z" level=info msg="StopPodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\"" Jan 30 18:30:32.307609 systemd-networkd[1424]: cali22be1702655: Gained IPv6LL Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.340 [INFO][4251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.343 [INFO][4251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" iface="eth0" netns="/var/run/netns/cni-1eae2606-5746-c4aa-6aba-57f67eed0a00" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.343 [INFO][4251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" iface="eth0" netns="/var/run/netns/cni-1eae2606-5746-c4aa-6aba-57f67eed0a00" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.344 [INFO][4251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" iface="eth0" netns="/var/run/netns/cni-1eae2606-5746-c4aa-6aba-57f67eed0a00" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.344 [INFO][4251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.344 [INFO][4251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.399 [INFO][4280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.400 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.400 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.420 [WARNING][4280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.421 [INFO][4280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.426 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:32.436015 containerd[1503]: 2025-01-30 18:30:32.430 [INFO][4251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:32.438048 containerd[1503]: time="2025-01-30T18:30:32.437905297Z" level=info msg="TearDown network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" successfully" Jan 30 18:30:32.438247 systemd[1]: run-netns-cni\x2d1eae2606\x2d5746\x2dc4aa\x2d6aba\x2d57f67eed0a00.mount: Deactivated successfully. Jan 30 18:30:32.439477 containerd[1503]: time="2025-01-30T18:30:32.439149518Z" level=info msg="StopPodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" returns successfully" Jan 30 18:30:32.440629 containerd[1503]: time="2025-01-30T18:30:32.440593626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-np6kd,Uid:bb725008-8b91-4bc2-9fc5-057d3a965b20,Namespace:kube-system,Attempt:1,}" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.268 [INFO][4246] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.269 [INFO][4246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" iface="eth0" netns="/var/run/netns/cni-ebcd7034-a95f-2439-b6b0-5e4181b1fd4b" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.271 [INFO][4246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" iface="eth0" netns="/var/run/netns/cni-ebcd7034-a95f-2439-b6b0-5e4181b1fd4b" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.272 [INFO][4246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" iface="eth0" netns="/var/run/netns/cni-ebcd7034-a95f-2439-b6b0-5e4181b1fd4b" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.272 [INFO][4246] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.272 [INFO][4246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.402 [INFO][4269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.402 [INFO][4269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.426 [INFO][4269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.439 [WARNING][4269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.440 [INFO][4269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.444 [INFO][4269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:32.457958 containerd[1503]: 2025-01-30 18:30:32.448 [INFO][4246] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:32.461143 containerd[1503]: time="2025-01-30T18:30:32.458255149Z" level=info msg="TearDown network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" successfully" Jan 30 18:30:32.461143 containerd[1503]: time="2025-01-30T18:30:32.458277037Z" level=info msg="StopPodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" returns successfully" Jan 30 18:30:32.462163 containerd[1503]: time="2025-01-30T18:30:32.461659785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g2vfs,Uid:16dcc974-7fc7-4198-87ca-318a9eb2503b,Namespace:kube-system,Attempt:1,}" Jan 30 18:30:32.464101 systemd[1]: run-netns-cni\x2debcd7034\x2da95f\x2d2439\x2db6b0\x2d5e4181b1fd4b.mount: Deactivated successfully. Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.290 [INFO][4247] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.290 [INFO][4247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" iface="eth0" netns="/var/run/netns/cni-4a8dd5da-7c50-4833-0d88-8eda84145914" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.291 [INFO][4247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" iface="eth0" netns="/var/run/netns/cni-4a8dd5da-7c50-4833-0d88-8eda84145914" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.292 [INFO][4247] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" iface="eth0" netns="/var/run/netns/cni-4a8dd5da-7c50-4833-0d88-8eda84145914" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.292 [INFO][4247] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.292 [INFO][4247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.409 [INFO][4274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.409 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.444 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.453 [WARNING][4274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.453 [INFO][4274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.458 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:32.476321 containerd[1503]: 2025-01-30 18:30:32.467 [INFO][4247] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:32.476321 containerd[1503]: time="2025-01-30T18:30:32.476231635Z" level=info msg="TearDown network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" successfully" Jan 30 18:30:32.476321 containerd[1503]: time="2025-01-30T18:30:32.476250631Z" level=info msg="StopPodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" returns successfully" Jan 30 18:30:32.480194 containerd[1503]: time="2025-01-30T18:30:32.477423914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567c559546-mqg9t,Uid:5f4842b6-cff0-4aec-9953-449edcf4eb42,Namespace:calico-system,Attempt:1,}" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.345 [INFO][4252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.345 [INFO][4252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" iface="eth0" netns="/var/run/netns/cni-ad142a24-19b3-bb7b-61d9-0bbf62c036e0" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.347 [INFO][4252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" iface="eth0" netns="/var/run/netns/cni-ad142a24-19b3-bb7b-61d9-0bbf62c036e0" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.348 [INFO][4252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" iface="eth0" netns="/var/run/netns/cni-ad142a24-19b3-bb7b-61d9-0bbf62c036e0" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.348 [INFO][4252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.348 [INFO][4252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.427 [INFO][4284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.428 [INFO][4284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.458 [INFO][4284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.485 [WARNING][4284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.485 [INFO][4284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.489 [INFO][4284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:32.501720 containerd[1503]: 2025-01-30 18:30:32.493 [INFO][4252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:32.503205 containerd[1503]: time="2025-01-30T18:30:32.503064219Z" level=info msg="TearDown network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" successfully" Jan 30 18:30:32.504281 containerd[1503]: time="2025-01-30T18:30:32.503443014Z" level=info msg="StopPodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" returns successfully" Jan 30 18:30:32.504718 containerd[1503]: time="2025-01-30T18:30:32.504516255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzj4j,Uid:24904d3d-1841-497d-aeec-b53f05fc7a48,Namespace:calico-system,Attempt:1,}" Jan 30 18:30:32.697206 systemd[1]: run-netns-cni\x2dad142a24\x2d19b3\x2dbb7b\x2d61d9\x2d0bbf62c036e0.mount: Deactivated successfully. Jan 30 18:30:32.697292 systemd[1]: run-netns-cni\x2d4a8dd5da\x2d7c50\x2d4833\x2d0d88\x2d8eda84145914.mount: Deactivated successfully. 
Jan 30 18:30:32.808783 systemd-networkd[1424]: caliaa7ca067dea: Link UP Jan 30 18:30:32.809768 systemd-networkd[1424]: caliaa7ca067dea: Gained carrier Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.557 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.578 [INFO][4296] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0 coredns-6f6b679f8f- kube-system bb725008-8b91-4bc2-9fc5-057d3a965b20 751 0 2025-01-30 18:29:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-eex0h.gb1.brightbox.com coredns-6f6b679f8f-np6kd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa7ca067dea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.578 [INFO][4296] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.684 [INFO][4345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" HandleID="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.720 [INFO][4345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" HandleID="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a6df0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-eex0h.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-np6kd", "timestamp":"2025-01-30 18:30:32.684850423 +0000 UTC"}, Hostname:"srv-eex0h.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.721 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.721 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.721 [INFO][4345] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-eex0h.gb1.brightbox.com' Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.724 [INFO][4345] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.734 [INFO][4345] ipam/ipam.go 372: Looking up existing affinities for host host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.753 [INFO][4345] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.757 [INFO][4345] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.759 [INFO][4345] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.760 [INFO][4345] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.763 [INFO][4345] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362 Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.776 [INFO][4345] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.791 [INFO][4345] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.3/26] block=192.168.69.0/26 handle="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.791 [INFO][4345] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.3/26] handle="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.792 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 18:30:32.854096 containerd[1503]: 2025-01-30 18:30:32.792 [INFO][4345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.3/26] IPv6=[] ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" HandleID="k8s-pod-network.58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.854987 containerd[1503]: 2025-01-30 18:30:32.802 [INFO][4296] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bb725008-8b91-4bc2-9fc5-057d3a965b20", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-np6kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa7ca067dea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:32.854987 containerd[1503]: 2025-01-30 18:30:32.802 [INFO][4296] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.3/32] ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.854987 containerd[1503]: 2025-01-30 18:30:32.802 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa7ca067dea ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.854987 containerd[1503]: 2025-01-30 18:30:32.811 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" 
WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.854987 containerd[1503]: 2025-01-30 18:30:32.818 [INFO][4296] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bb725008-8b91-4bc2-9fc5-057d3a965b20", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362", Pod:"coredns-6f6b679f8f-np6kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa7ca067dea", MAC:"9a:8c:36:3e:12:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:32.854987 containerd[1503]: 2025-01-30 18:30:32.843 [INFO][4296] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362" Namespace="kube-system" Pod="coredns-6f6b679f8f-np6kd" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:32.917320 containerd[1503]: time="2025-01-30T18:30:32.915512344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:32.917320 containerd[1503]: time="2025-01-30T18:30:32.915571236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:32.917320 containerd[1503]: time="2025-01-30T18:30:32.915582913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:32.917320 containerd[1503]: time="2025-01-30T18:30:32.915655294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:32.961087 systemd[1]: Started cri-containerd-58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362.scope - libcontainer container 58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362. Jan 30 18:30:33.009300 systemd-networkd[1424]: caliefe9481fc58: Link UP Jan 30 18:30:33.011866 systemd-networkd[1424]: caliefe9481fc58: Gained carrier Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.567 [INFO][4306] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.590 [INFO][4306] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0 coredns-6f6b679f8f- kube-system 16dcc974-7fc7-4198-87ca-318a9eb2503b 749 0 2025-01-30 18:29:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-eex0h.gb1.brightbox.com coredns-6f6b679f8f-g2vfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliefe9481fc58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.590 [INFO][4306] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.699 [INFO][4346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" HandleID="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.724 [INFO][4346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" HandleID="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003199b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-eex0h.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-g2vfs", "timestamp":"2025-01-30 18:30:32.69917024 +0000 UTC"}, Hostname:"srv-eex0h.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.725 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.792 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.792 [INFO][4346] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-eex0h.gb1.brightbox.com' Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.827 [INFO][4346] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.847 [INFO][4346] ipam/ipam.go 372: Looking up existing affinities for host host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.909 [INFO][4346] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.914 [INFO][4346] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.940 [INFO][4346] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.940 [INFO][4346] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.952 [INFO][4346] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4 Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.972 [INFO][4346] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.996 [INFO][4346] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.4/26] block=192.168.69.0/26 handle="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.996 [INFO][4346] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.4/26] handle="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.996 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 18:30:33.061443 containerd[1503]: 2025-01-30 18:30:32.998 [INFO][4346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.4/26] IPv6=[] ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" HandleID="k8s-pod-network.35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.062556 containerd[1503]: 2025-01-30 18:30:33.002 [INFO][4306] cni-plugin/k8s.go 386: Populated endpoint ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16dcc974-7fc7-4198-87ca-318a9eb2503b", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-g2vfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefe9481fc58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:33.062556 containerd[1503]: 2025-01-30 18:30:33.003 [INFO][4306] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.4/32] ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.062556 containerd[1503]: 2025-01-30 18:30:33.003 [INFO][4306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefe9481fc58 ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.062556 containerd[1503]: 2025-01-30 18:30:33.013 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" 
WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.062556 containerd[1503]: 2025-01-30 18:30:33.016 [INFO][4306] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16dcc974-7fc7-4198-87ca-318a9eb2503b", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4", Pod:"coredns-6f6b679f8f-g2vfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefe9481fc58", MAC:"9a:f6:b8:a7:20:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:33.062556 containerd[1503]: 2025-01-30 18:30:33.054 [INFO][4306] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-g2vfs" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:33.074192 systemd-networkd[1424]: calif599f1b5e86: Gained IPv6LL Jan 30 18:30:33.130285 containerd[1503]: time="2025-01-30T18:30:33.130133685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:33.130285 containerd[1503]: time="2025-01-30T18:30:33.130220954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:33.130285 containerd[1503]: time="2025-01-30T18:30:33.130242453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:33.132347 containerd[1503]: time="2025-01-30T18:30:33.132093465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:33.193717 containerd[1503]: time="2025-01-30T18:30:33.193354146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-np6kd,Uid:bb725008-8b91-4bc2-9fc5-057d3a965b20,Namespace:kube-system,Attempt:1,} returns sandbox id \"58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362\"" Jan 30 18:30:33.199844 systemd[1]: Started cri-containerd-35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4.scope - libcontainer container 35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4. Jan 30 18:30:33.208554 containerd[1503]: time="2025-01-30T18:30:33.208518249Z" level=info msg="CreateContainer within sandbox \"58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 18:30:33.229647 systemd-networkd[1424]: cali7068132e6d3: Link UP Jan 30 18:30:33.234100 systemd-networkd[1424]: cali7068132e6d3: Gained carrier Jan 30 18:30:33.263243 containerd[1503]: time="2025-01-30T18:30:33.262313465Z" level=info msg="CreateContainer within sandbox \"58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5787ded22b77625cbb754f5184e054cf70a5a53561b0c7d13da5bdffe3f980ef\"" Jan 30 18:30:33.268043 containerd[1503]: time="2025-01-30T18:30:33.266460545Z" level=info msg="StartContainer for \"5787ded22b77625cbb754f5184e054cf70a5a53561b0c7d13da5bdffe3f980ef\"" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.615 [INFO][4326] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.645 [INFO][4326] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0 csi-node-driver- calico-system 24904d3d-1841-497d-aeec-b53f05fc7a48 752 0 2025-01-30 18:30:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-eex0h.gb1.brightbox.com csi-node-driver-zzj4j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7068132e6d3 [] []}} ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.646 [INFO][4326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.847 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" HandleID="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.952 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" HandleID="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006101d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-eex0h.gb1.brightbox.com", "pod":"csi-node-driver-zzj4j", "timestamp":"2025-01-30 18:30:32.847464447 +0000 UTC"}, Hostname:"srv-eex0h.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.952 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.997 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:32.999 [INFO][4360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-eex0h.gb1.brightbox.com' Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.013 [INFO][4360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.047 [INFO][4360] ipam/ipam.go 372: Looking up existing affinities for host host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.098 [INFO][4360] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.103 [INFO][4360] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.114 [INFO][4360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.115 [INFO][4360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.126 [INFO][4360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.157 [INFO][4360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.198 [INFO][4360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.5/26] block=192.168.69.0/26 handle="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.199 [INFO][4360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.5/26] handle="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.200 
[INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:33.274757 containerd[1503]: 2025-01-30 18:30:33.201 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.5/26] IPv6=[] ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" HandleID="k8s-pod-network.3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.275481 containerd[1503]: 2025-01-30 18:30:33.214 [INFO][4326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24904d3d-1841-497d-aeec-b53f05fc7a48", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-zzj4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7068132e6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:33.275481 containerd[1503]: 2025-01-30 18:30:33.214 [INFO][4326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.5/32] ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.275481 containerd[1503]: 2025-01-30 18:30:33.214 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7068132e6d3 ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.275481 containerd[1503]: 2025-01-30 18:30:33.240 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.275481 containerd[1503]: 2025-01-30 18:30:33.241 [INFO][4326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24904d3d-1841-497d-aeec-b53f05fc7a48", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c", Pod:"csi-node-driver-zzj4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7068132e6d3", MAC:"66:7d:20:84:41:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:33.275481 containerd[1503]: 2025-01-30 18:30:33.267 [INFO][4326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c" Namespace="calico-system" Pod="csi-node-driver-zzj4j" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:33.305226 systemd-networkd[1424]: califb65fd49ad8: Link UP Jan 30 18:30:33.307038 systemd-networkd[1424]: califb65fd49ad8: Gained carrier Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:32.604 [INFO][4318] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:32.645 [INFO][4318] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0 calico-kube-controllers-567c559546- calico-system 5f4842b6-cff0-4aec-9953-449edcf4eb42 750 0 2025-01-30 18:30:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:567c559546 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-eex0h.gb1.brightbox.com calico-kube-controllers-567c559546-mqg9t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califb65fd49ad8 [] []}} ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-" Jan 30 18:30:33.348998 
containerd[1503]: 2025-01-30 18:30:32.645 [INFO][4318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:32.833 [INFO][4367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" HandleID="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:32.956 [INFO][4367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" HandleID="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f6940), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-eex0h.gb1.brightbox.com", "pod":"calico-kube-controllers-567c559546-mqg9t", "timestamp":"2025-01-30 18:30:32.829679197 +0000 UTC"}, Hostname:"srv-eex0h.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:32.956 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.202 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.202 [INFO][4367] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-eex0h.gb1.brightbox.com' Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.220 [INFO][4367] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.235 [INFO][4367] ipam/ipam.go 372: Looking up existing affinities for host host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.247 [INFO][4367] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.251 [INFO][4367] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.257 [INFO][4367] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.257 [INFO][4367] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.260 [INFO][4367] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.269 [INFO][4367] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.284 [INFO][4367] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.6/26] block=192.168.69.0/26 handle="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.284 [INFO][4367] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.6/26] handle="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" host="srv-eex0h.gb1.brightbox.com" Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.284 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 18:30:33.348998 containerd[1503]: 2025-01-30 18:30:33.284 [INFO][4367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.6/26] IPv6=[] ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" HandleID="k8s-pod-network.c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.350229 containerd[1503]: 2025-01-30 18:30:33.291 [INFO][4318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0", GenerateName:"calico-kube-controllers-567c559546-", Namespace:"calico-system", SelfLink:"", UID:"5f4842b6-cff0-4aec-9953-449edcf4eb42", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567c559546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-567c559546-mqg9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb65fd49ad8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:33.350229 containerd[1503]: 2025-01-30 18:30:33.293 [INFO][4318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.6/32] ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.350229 containerd[1503]: 2025-01-30 18:30:33.294 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb65fd49ad8 ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.350229 containerd[1503]: 2025-01-30 18:30:33.310 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.350229 
containerd[1503]: 2025-01-30 18:30:33.311 [INFO][4318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0", GenerateName:"calico-kube-controllers-567c559546-", Namespace:"calico-system", SelfLink:"", UID:"5f4842b6-cff0-4aec-9953-449edcf4eb42", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567c559546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea", Pod:"calico-kube-controllers-567c559546-mqg9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb65fd49ad8", MAC:"ae:f4:42:db:dd:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:33.350229 containerd[1503]: 2025-01-30 18:30:33.338 [INFO][4318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea" Namespace="calico-system" Pod="calico-kube-controllers-567c559546-mqg9t" WorkloadEndpoint="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:33.374001 containerd[1503]: time="2025-01-30T18:30:33.372633694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:33.374001 containerd[1503]: time="2025-01-30T18:30:33.372794309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:33.374001 containerd[1503]: time="2025-01-30T18:30:33.372808606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:33.372995 systemd[1]: Started cri-containerd-5787ded22b77625cbb754f5184e054cf70a5a53561b0c7d13da5bdffe3f980ef.scope - libcontainer container 5787ded22b77625cbb754f5184e054cf70a5a53561b0c7d13da5bdffe3f980ef. Jan 30 18:30:33.383913 containerd[1503]: time="2025-01-30T18:30:33.375840620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:33.387593 containerd[1503]: time="2025-01-30T18:30:33.387061317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g2vfs,Uid:16dcc974-7fc7-4198-87ca-318a9eb2503b,Namespace:kube-system,Attempt:1,} returns sandbox id \"35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4\"" Jan 30 18:30:33.398978 containerd[1503]: time="2025-01-30T18:30:33.397782867Z" level=info msg="CreateContainer within sandbox \"35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 18:30:33.415410 containerd[1503]: time="2025-01-30T18:30:33.415351337Z" level=info msg="CreateContainer within sandbox \"35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16e96fd05bcd5e18eab9a3b4c09b9cab514b248281d2d37e299e3b4df3e6ea7f\"" Jan 30 18:30:33.417349 containerd[1503]: time="2025-01-30T18:30:33.417314812Z" level=info msg="StartContainer for \"16e96fd05bcd5e18eab9a3b4c09b9cab514b248281d2d37e299e3b4df3e6ea7f\"" Jan 30 18:30:33.427164 systemd[1]: Started cri-containerd-3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c.scope - libcontainer container 3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c. Jan 30 18:30:33.461069 containerd[1503]: time="2025-01-30T18:30:33.460801784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:30:33.461069 containerd[1503]: time="2025-01-30T18:30:33.460871839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:30:33.461069 containerd[1503]: time="2025-01-30T18:30:33.460884310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:33.461069 containerd[1503]: time="2025-01-30T18:30:33.460967703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:30:33.471669 containerd[1503]: time="2025-01-30T18:30:33.471405350Z" level=info msg="StartContainer for \"5787ded22b77625cbb754f5184e054cf70a5a53561b0c7d13da5bdffe3f980ef\" returns successfully" Jan 30 18:30:33.493539 containerd[1503]: time="2025-01-30T18:30:33.493265653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzj4j,Uid:24904d3d-1841-497d-aeec-b53f05fc7a48,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c\"" Jan 30 18:30:33.528905 systemd[1]: Started cri-containerd-16e96fd05bcd5e18eab9a3b4c09b9cab514b248281d2d37e299e3b4df3e6ea7f.scope - libcontainer container 16e96fd05bcd5e18eab9a3b4c09b9cab514b248281d2d37e299e3b4df3e6ea7f. Jan 30 18:30:33.531688 systemd[1]: Started cri-containerd-c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea.scope - libcontainer container c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea. 
Jan 30 18:30:33.578673 kubelet[2657]: I0130 18:30:33.578270 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-np6kd" podStartSLOduration=35.578134781 podStartE2EDuration="35.578134781s" podCreationTimestamp="2025-01-30 18:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:30:33.575899557 +0000 UTC m=+41.582655770" watchObservedRunningTime="2025-01-30 18:30:33.578134781 +0000 UTC m=+41.584890974" Jan 30 18:30:33.650700 containerd[1503]: time="2025-01-30T18:30:33.650643310Z" level=info msg="StartContainer for \"16e96fd05bcd5e18eab9a3b4c09b9cab514b248281d2d37e299e3b4df3e6ea7f\" returns successfully" Jan 30 18:30:33.707652 containerd[1503]: time="2025-01-30T18:30:33.706921914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567c559546-mqg9t,Uid:5f4842b6-cff0-4aec-9953-449edcf4eb42,Namespace:calico-system,Attempt:1,} returns sandbox id \"c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea\"" Jan 30 18:30:33.901669 kubelet[2657]: I0130 18:30:33.901507 2657 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:30:33.970930 systemd-networkd[1424]: caliaa7ca067dea: Gained IPv6LL Jan 30 18:30:34.289890 systemd-networkd[1424]: caliefe9481fc58: Gained IPv6LL Jan 30 18:30:34.323840 containerd[1503]: time="2025-01-30T18:30:34.323791936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:34.324974 containerd[1503]: time="2025-01-30T18:30:34.324935025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 18:30:34.325590 containerd[1503]: time="2025-01-30T18:30:34.325567291Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:34.328712 containerd[1503]: time="2025-01-30T18:30:34.327331170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:34.330799 containerd[1503]: time="2025-01-30T18:30:34.330764588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.570567492s" Jan 30 18:30:34.330958 containerd[1503]: time="2025-01-30T18:30:34.330873122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 18:30:34.332383 containerd[1503]: time="2025-01-30T18:30:34.332053382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 18:30:34.333241 containerd[1503]: time="2025-01-30T18:30:34.333216619Z" level=info msg="CreateContainer within sandbox \"8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 18:30:34.348314 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3041313096.mount: Deactivated successfully. Jan 30 18:30:34.351800 containerd[1503]: time="2025-01-30T18:30:34.351762829Z" level=info msg="CreateContainer within sandbox \"8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59ec582a59026d42b2bde897bdf08ae56a5217663a157bb89828ceb37ab8a00f\"" Jan 30 18:30:34.354217 containerd[1503]: time="2025-01-30T18:30:34.353242119Z" level=info msg="StartContainer for \"59ec582a59026d42b2bde897bdf08ae56a5217663a157bb89828ceb37ab8a00f\"" Jan 30 18:30:34.400852 systemd[1]: Started cri-containerd-59ec582a59026d42b2bde897bdf08ae56a5217663a157bb89828ceb37ab8a00f.scope - libcontainer container 59ec582a59026d42b2bde897bdf08ae56a5217663a157bb89828ceb37ab8a00f. Jan 30 18:30:34.454912 containerd[1503]: time="2025-01-30T18:30:34.454872814Z" level=info msg="StartContainer for \"59ec582a59026d42b2bde897bdf08ae56a5217663a157bb89828ceb37ab8a00f\" returns successfully" Jan 30 18:30:34.481810 systemd-networkd[1424]: califb65fd49ad8: Gained IPv6LL Jan 30 18:30:34.580211 kubelet[2657]: I0130 18:30:34.580061 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-g2vfs" podStartSLOduration=36.580032106 podStartE2EDuration="36.580032106s" podCreationTimestamp="2025-01-30 18:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:30:34.556816462 +0000 UTC m=+42.563572680" watchObservedRunningTime="2025-01-30 18:30:34.580032106 +0000 UTC m=+42.586788405" Jan 30 18:30:34.602162 kubelet[2657]: I0130 18:30:34.602065 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65c57d4b87-ttjhg" podStartSLOduration=26.029744286 podStartE2EDuration="29.602022936s" podCreationTimestamp="2025-01-30 18:30:05 +0000 UTC" firstStartedPulling="2025-01-30 18:30:30.759662536 +0000 UTC m=+38.766418728" lastFinishedPulling="2025-01-30 18:30:34.331941182 +0000 UTC m=+42.338697378" observedRunningTime="2025-01-30 18:30:34.601377678 +0000 UTC m=+42.608133891" watchObservedRunningTime="2025-01-30 18:30:34.602022936 +0000 UTC m=+42.608779154" Jan 30 18:30:34.702342 containerd[1503]: time="2025-01-30T18:30:34.702276200Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:34.703956 containerd[1503]: time="2025-01-30T18:30:34.703916738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 18:30:34.705557 containerd[1503]: time="2025-01-30T18:30:34.705528133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 373.448249ms" Jan 30 18:30:34.705675 containerd[1503]: time="2025-01-30T18:30:34.705565125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 18:30:34.707924 containerd[1503]: time="2025-01-30T18:30:34.707895075Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 18:30:34.711944 containerd[1503]: time="2025-01-30T18:30:34.711905530Z" level=info msg="CreateContainer within sandbox \"982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 18:30:34.721554 containerd[1503]: time="2025-01-30T18:30:34.721512477Z" level=info msg="CreateContainer within sandbox \"982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"92dac284fdbaae466d1c4678ed40a9b4baad75659c0854c1e595e5ea37be2170\"" Jan 30 18:30:34.722864 containerd[1503]: time="2025-01-30T18:30:34.722835503Z" level=info msg="StartContainer for \"92dac284fdbaae466d1c4678ed40a9b4baad75659c0854c1e595e5ea37be2170\"" Jan 30 18:30:34.782856 systemd[1]: Started cri-containerd-92dac284fdbaae466d1c4678ed40a9b4baad75659c0854c1e595e5ea37be2170.scope - libcontainer container 92dac284fdbaae466d1c4678ed40a9b4baad75659c0854c1e595e5ea37be2170. Jan 30 18:30:34.865900 systemd-networkd[1424]: cali7068132e6d3: Gained IPv6LL Jan 30 18:30:34.884457 containerd[1503]: time="2025-01-30T18:30:34.884304658Z" level=info msg="StartContainer for \"92dac284fdbaae466d1c4678ed40a9b4baad75659c0854c1e595e5ea37be2170\" returns successfully" Jan 30 18:30:35.020859 kernel: bpftool[4797]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 18:30:35.571510 kubelet[2657]: I0130 18:30:35.571419 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65c57d4b87-mkzvs" podStartSLOduration=27.412543358 podStartE2EDuration="30.570754083s" podCreationTimestamp="2025-01-30 18:30:05 +0000 UTC" firstStartedPulling="2025-01-30 18:30:31.548330127 +0000 UTC m=+39.555086319" lastFinishedPulling="2025-01-30 18:30:34.706540848 +0000 UTC m=+42.713297044" observedRunningTime="2025-01-30 18:30:35.569114648 +0000 UTC m=+43.575870865" watchObservedRunningTime="2025-01-30 18:30:35.570754083 +0000 UTC m=+43.577510297" Jan 30 18:30:35.794267 systemd-networkd[1424]: vxlan.calico: Link UP Jan 30 18:30:35.794274 systemd-networkd[1424]: vxlan.calico: Gained carrier Jan 30 18:30:36.423158 containerd[1503]: time="2025-01-30T18:30:36.423094337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:36.424425 containerd[1503]: time="2025-01-30T18:30:36.424296741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 18:30:36.426920 containerd[1503]: time="2025-01-30T18:30:36.426051648Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:36.428644 containerd[1503]: time="2025-01-30T18:30:36.428621619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:36.430342 containerd[1503]: time="2025-01-30T18:30:36.430301258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.722374197s" Jan 30 18:30:36.430476 containerd[1503]: time="2025-01-30T18:30:36.430460334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 18:30:36.432334 containerd[1503]: time="2025-01-30T18:30:36.432316316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 18:30:36.438204 containerd[1503]: time="2025-01-30T18:30:36.438148662Z" level=info msg="CreateContainer within sandbox \"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 18:30:36.465625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930382345.mount: Deactivated successfully. Jan 30 18:30:36.467659 containerd[1503]: time="2025-01-30T18:30:36.466346813Z" level=info msg="CreateContainer within sandbox \"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6bd4d434eff321a05486c2a6799384c9c0048b74a5c96b3201bd36c5472fec41\"" Jan 30 18:30:36.468965 containerd[1503]: time="2025-01-30T18:30:36.468932635Z" level=info msg="StartContainer for \"6bd4d434eff321a05486c2a6799384c9c0048b74a5c96b3201bd36c5472fec41\"" Jan 30 18:30:36.558146 kubelet[2657]: I0130 18:30:36.558021 2657 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:30:36.566020 systemd[1]: Started cri-containerd-6bd4d434eff321a05486c2a6799384c9c0048b74a5c96b3201bd36c5472fec41.scope - libcontainer container 6bd4d434eff321a05486c2a6799384c9c0048b74a5c96b3201bd36c5472fec41. 
Jan 30 18:30:36.672637 containerd[1503]: time="2025-01-30T18:30:36.672586243Z" level=info msg="StartContainer for \"6bd4d434eff321a05486c2a6799384c9c0048b74a5c96b3201bd36c5472fec41\" returns successfully" Jan 30 18:30:37.747519 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Jan 30 18:30:39.034333 containerd[1503]: time="2025-01-30T18:30:39.034256203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:39.036016 containerd[1503]: time="2025-01-30T18:30:39.035950374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 18:30:39.036931 containerd[1503]: time="2025-01-30T18:30:39.036587129Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:39.039078 containerd[1503]: time="2025-01-30T18:30:39.038807168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:39.040037 containerd[1503]: time="2025-01-30T18:30:39.039532234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.607099722s" Jan 30 18:30:39.040037 containerd[1503]: time="2025-01-30T18:30:39.039568055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 18:30:39.040653 containerd[1503]: time="2025-01-30T18:30:39.040496350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 18:30:39.061149 containerd[1503]: time="2025-01-30T18:30:39.060937441Z" level=info msg="CreateContainer within sandbox \"c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 18:30:39.112311 containerd[1503]: time="2025-01-30T18:30:39.112229042Z" level=info msg="CreateContainer within sandbox \"c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75\"" Jan 30 18:30:39.113038 containerd[1503]: time="2025-01-30T18:30:39.112941720Z" level=info msg="StartContainer for \"a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75\"" Jan 30 18:30:39.164921 systemd[1]: Started cri-containerd-a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75.scope - libcontainer container a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75. 
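
Each pod_startup_latency_tracker line in this section carries enough timestamps to recompute its own durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that; when both pull timestamps are the zero value, as in the coredns entry earlier, the two figures coincide. A minimal check against the calico-apiserver-65c57d4b87-ttjhg entry above — the subtraction rule is inferred from the logged values, and the result lands within a few nanoseconds of the logged figure because the tracker reads a monotonic clock:

```go
// Hedged sketch: recomputes the two durations reported by
// pod_startup_latency_tracker.go:104 from the wall-clock timestamps in the
// log (the "m=+..." monotonic suffixes are dropped before parsing).
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-01-30 18:30:05 +0000 UTC")                // podCreationTimestamp
	firstPull := parse("2025-01-30 18:30:30.759662536 +0000 UTC")    // firstStartedPulling
	lastPull := parse("2025-01-30 18:30:34.331941182 +0000 UTC")     // lastFinishedPulling
	observed := parse("2025-01-30 18:30:34.602022936 +0000 UTC")     // watchObservedRunningTime

	e2e := observed.Sub(created)         // 29.602022936s, matching podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 26.02974429s vs logged 26.029744286 (4ns of monotonic skew)

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```
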
Jan 30 18:30:39.214399 containerd[1503]: time="2025-01-30T18:30:39.214285590Z" level=info msg="StartContainer for \"a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75\" returns successfully" Jan 30 18:30:39.660462 kubelet[2657]: I0130 18:30:39.660218 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-567c559546-mqg9t" podStartSLOduration=28.329503853 podStartE2EDuration="33.660081387s" podCreationTimestamp="2025-01-30 18:30:06 +0000 UTC" firstStartedPulling="2025-01-30 18:30:33.709786433 +0000 UTC m=+41.716542630" lastFinishedPulling="2025-01-30 18:30:39.040363966 +0000 UTC m=+47.047120164" observedRunningTime="2025-01-30 18:30:39.598497988 +0000 UTC m=+47.605254196" watchObservedRunningTime="2025-01-30 18:30:39.660081387 +0000 UTC m=+47.666837592" Jan 30 18:30:40.707509 containerd[1503]: time="2025-01-30T18:30:40.707461035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:40.709341 containerd[1503]: time="2025-01-30T18:30:40.708669574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 18:30:40.709624 containerd[1503]: time="2025-01-30T18:30:40.709601142Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:40.717495 containerd[1503]: time="2025-01-30T18:30:40.716275328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:30:40.718596 containerd[1503]: time="2025-01-30T18:30:40.717990595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.677463992s" Jan 30 18:30:40.718596 containerd[1503]: time="2025-01-30T18:30:40.718029101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 18:30:40.725255 containerd[1503]: time="2025-01-30T18:30:40.725230825Z" level=info msg="CreateContainer within sandbox \"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 18:30:40.742396 containerd[1503]: time="2025-01-30T18:30:40.742243409Z" level=info msg="CreateContainer within sandbox \"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"70406aac357bec742e688d9c4bdea6fc2a57410860bb1a0ae633231a5de1f6ae\"" Jan 30 18:30:40.743728 containerd[1503]: time="2025-01-30T18:30:40.743678802Z" level=info msg="StartContainer for \"70406aac357bec742e688d9c4bdea6fc2a57410860bb1a0ae633231a5de1f6ae\"" Jan 30 18:30:40.787840 systemd[1]: Started cri-containerd-70406aac357bec742e688d9c4bdea6fc2a57410860bb1a0ae633231a5de1f6ae.scope - libcontainer container 
70406aac357bec742e688d9c4bdea6fc2a57410860bb1a0ae633231a5de1f6ae. Jan 30 18:30:40.821372 containerd[1503]: time="2025-01-30T18:30:40.821230138Z" level=info msg="StartContainer for \"70406aac357bec742e688d9c4bdea6fc2a57410860bb1a0ae633231a5de1f6ae\" returns successfully" Jan 30 18:30:41.420698 kubelet[2657]: I0130 18:30:41.420627 2657 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 18:30:41.422563 kubelet[2657]: I0130 18:30:41.422526 2657 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 18:30:41.609755 kubelet[2657]: I0130 18:30:41.609125 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zzj4j" podStartSLOduration=28.394319694 podStartE2EDuration="35.609100141s" podCreationTimestamp="2025-01-30 18:30:06 +0000 UTC" firstStartedPulling="2025-01-30 18:30:33.504969516 +0000 UTC m=+41.511725713" lastFinishedPulling="2025-01-30 18:30:40.719749962 +0000 UTC m=+48.726506160" observedRunningTime="2025-01-30 18:30:41.607737689 +0000 UTC m=+49.614493908" watchObservedRunningTime="2025-01-30 18:30:41.609100141 +0000 UTC m=+49.615856353" Jan 30 18:30:51.903151 systemd[1]: run-containerd-runc-k8s.io-a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75-runc.y7sqN6.mount: Deactivated successfully. Jan 30 18:30:52.155974 containerd[1503]: time="2025-01-30T18:30:52.155528568Z" level=info msg="StopPodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\"" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.263 [WARNING][5129] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24904d3d-1841-497d-aeec-b53f05fc7a48", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c", Pod:"csi-node-driver-zzj4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7068132e6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.265 [INFO][5129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.265 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" iface="eth0" netns="" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.265 [INFO][5129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.265 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.307 [INFO][5135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.308 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.308 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.316 [WARNING][5135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.317 [INFO][5135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.320 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:52.329513 containerd[1503]: 2025-01-30 18:30:52.326 [INFO][5129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.334803 containerd[1503]: time="2025-01-30T18:30:52.329599784Z" level=info msg="TearDown network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" successfully" Jan 30 18:30:52.334803 containerd[1503]: time="2025-01-30T18:30:52.329647130Z" level=info msg="StopPodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" returns successfully" Jan 30 18:30:52.396111 containerd[1503]: time="2025-01-30T18:30:52.396030635Z" level=info msg="RemovePodSandbox for \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\"" Jan 30 18:30:52.399029 containerd[1503]: time="2025-01-30T18:30:52.398988815Z" level=info msg="Forcibly stopping sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\"" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.445 [WARNING][5153] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24904d3d-1841-497d-aeec-b53f05fc7a48", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"3d131c530c44f19831a68c8458ef40e0c18d0d38b73f1568eb6551845bb6126c", Pod:"csi-node-driver-zzj4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7068132e6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.445 [INFO][5153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.445 [INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" iface="eth0" netns="" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.445 [INFO][5153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.445 [INFO][5153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.469 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.469 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.469 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.476 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.476 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" HandleID="k8s-pod-network.a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Workload="srv--eex0h.gb1.brightbox.com-k8s-csi--node--driver--zzj4j-eth0" Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.478 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:52.482334 containerd[1503]: 2025-01-30 18:30:52.480 [INFO][5153] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4" Jan 30 18:30:52.482334 containerd[1503]: time="2025-01-30T18:30:52.482287414Z" level=info msg="TearDown network for sandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" successfully" Jan 30 18:30:52.517581 containerd[1503]: time="2025-01-30T18:30:52.517420634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 18:30:52.548751 containerd[1503]: time="2025-01-30T18:30:52.548661304Z" level=info msg="RemovePodSandbox \"a3cf6ff3173324593368d56272092b03495968c36072bc0f6e8b84f09257afe4\" returns successfully" Jan 30 18:30:52.553825 containerd[1503]: time="2025-01-30T18:30:52.553404755Z" level=info msg="StopPodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\"" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.601 [WARNING][5177] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"276afab4-5fe5-4590-b5dc-be8776d1a75c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6", Pod:"calico-apiserver-65c57d4b87-ttjhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22be1702655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.601 [INFO][5177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.601 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" iface="eth0" netns="" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.601 [INFO][5177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.601 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.626 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.626 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.626 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.637 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.638 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.639 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:52.644185 containerd[1503]: 2025-01-30 18:30:52.642 [INFO][5177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.645476 containerd[1503]: time="2025-01-30T18:30:52.644733922Z" level=info msg="TearDown network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" successfully" Jan 30 18:30:52.645476 containerd[1503]: time="2025-01-30T18:30:52.644759829Z" level=info msg="StopPodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" returns successfully" Jan 30 18:30:52.645917 containerd[1503]: time="2025-01-30T18:30:52.645898947Z" level=info msg="RemovePodSandbox for \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\"" Jan 30 18:30:52.646166 containerd[1503]: time="2025-01-30T18:30:52.646019668Z" level=info msg="Forcibly stopping sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\"" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.701 [WARNING][5202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"276afab4-5fe5-4590-b5dc-be8776d1a75c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"8fb83c43fb27f8d875226d2cfcce76b8c1280642c7cf7c6a3d6634adb3db39c6", Pod:"calico-apiserver-65c57d4b87-ttjhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22be1702655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.702 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.702 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" iface="eth0" netns="" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.702 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.702 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.728 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.728 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.728 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.735 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.735 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" HandleID="k8s-pod-network.30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--ttjhg-eth0" Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.737 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:52.742935 containerd[1503]: 2025-01-30 18:30:52.739 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34" Jan 30 18:30:52.742935 containerd[1503]: time="2025-01-30T18:30:52.741850228Z" level=info msg="TearDown network for sandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" successfully" Jan 30 18:30:52.750533 containerd[1503]: time="2025-01-30T18:30:52.750445402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 18:30:52.750637 containerd[1503]: time="2025-01-30T18:30:52.750593421Z" level=info msg="RemovePodSandbox \"30d655abe7712d1cf9aa08d5d0ebb4b8779338bb565eea0fa7531e4350486a34\" returns successfully" Jan 30 18:30:52.751582 containerd[1503]: time="2025-01-30T18:30:52.751541315Z" level=info msg="StopPodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\"" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.802 [WARNING][5226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bb725008-8b91-4bc2-9fc5-057d3a965b20", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362", Pod:"coredns-6f6b679f8f-np6kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa7ca067dea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.802 [INFO][5226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.802 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" iface="eth0" netns="" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.802 [INFO][5226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.802 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.838 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.838 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.838 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.852 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.852 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.854 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:52.858379 containerd[1503]: 2025-01-30 18:30:52.856 [INFO][5226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.860562 containerd[1503]: time="2025-01-30T18:30:52.858401448Z" level=info msg="TearDown network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" successfully" Jan 30 18:30:52.860562 containerd[1503]: time="2025-01-30T18:30:52.858435470Z" level=info msg="StopPodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" returns successfully" Jan 30 18:30:52.860562 containerd[1503]: time="2025-01-30T18:30:52.859316005Z" level=info msg="RemovePodSandbox for \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\"" Jan 30 18:30:52.860562 containerd[1503]: time="2025-01-30T18:30:52.859381802Z" level=info msg="Forcibly stopping sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\"" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.903 [WARNING][5250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bb725008-8b91-4bc2-9fc5-057d3a965b20", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"58564a12b66b8d2f7b923303733f285f7ad2ca76965bece21e0216d9dffb9362", Pod:"coredns-6f6b679f8f-np6kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa7ca067dea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.903 [INFO][5250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.903 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" iface="eth0" netns="" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.903 [INFO][5250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.903 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.930 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.930 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.931 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.937 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.937 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" HandleID="k8s-pod-network.0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--np6kd-eth0" Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.940 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:52.947067 containerd[1503]: 2025-01-30 18:30:52.943 [INFO][5250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779" Jan 30 18:30:52.947067 containerd[1503]: time="2025-01-30T18:30:52.946991953Z" level=info msg="TearDown network for sandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" successfully" Jan 30 18:30:52.952851 containerd[1503]: time="2025-01-30T18:30:52.952781051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 18:30:52.953006 containerd[1503]: time="2025-01-30T18:30:52.952881977Z" level=info msg="RemovePodSandbox \"0ea3103b34cba32a5229e8b648e6264c3664688a4ab1f1e35bcd76eb682c8779\" returns successfully" Jan 30 18:30:52.953600 containerd[1503]: time="2025-01-30T18:30:52.953571109Z" level=info msg="StopPodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\"" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.036 [WARNING][5274] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d4f1662-4fa4-4882-ad86-86d80d22e48c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b", Pod:"calico-apiserver-65c57d4b87-mkzvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif599f1b5e86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.037 [INFO][5274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.037 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" iface="eth0" netns="" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.037 [INFO][5274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.037 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.066 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.067 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.067 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.076 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.076 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.079 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:53.085127 containerd[1503]: 2025-01-30 18:30:53.082 [INFO][5274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.085127 containerd[1503]: time="2025-01-30T18:30:53.084874076Z" level=info msg="TearDown network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" successfully" Jan 30 18:30:53.085127 containerd[1503]: time="2025-01-30T18:30:53.084918925Z" level=info msg="StopPodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" returns successfully" Jan 30 18:30:53.086382 containerd[1503]: time="2025-01-30T18:30:53.086111243Z" level=info msg="RemovePodSandbox for \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\"" Jan 30 18:30:53.086382 containerd[1503]: time="2025-01-30T18:30:53.086175119Z" level=info msg="Forcibly stopping sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\"" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.133 [WARNING][5298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0", GenerateName:"calico-apiserver-65c57d4b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d4f1662-4fa4-4882-ad86-86d80d22e48c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65c57d4b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"982bed15449830f251f195d602270c34faddf4500315b6a5cff908d99d0f4d8b", Pod:"calico-apiserver-65c57d4b87-mkzvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif599f1b5e86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.133 [INFO][5298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.133 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" iface="eth0" netns="" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.133 [INFO][5298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.134 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.170 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.170 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.170 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.182 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.182 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" HandleID="k8s-pod-network.3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--apiserver--65c57d4b87--mkzvs-eth0" Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.185 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:53.191198 containerd[1503]: 2025-01-30 18:30:53.187 [INFO][5298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3" Jan 30 18:30:53.191198 containerd[1503]: time="2025-01-30T18:30:53.191200472Z" level=info msg="TearDown network for sandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" successfully" Jan 30 18:30:53.201862 containerd[1503]: time="2025-01-30T18:30:53.201814624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 18:30:53.202108 containerd[1503]: time="2025-01-30T18:30:53.201880585Z" level=info msg="RemovePodSandbox \"3425496dae1c5541fbe95fcb124b76d322790faeaf77849ffadcc3dfd0f9d5c3\" returns successfully" Jan 30 18:30:53.203228 containerd[1503]: time="2025-01-30T18:30:53.202836307Z" level=info msg="StopPodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\"" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.257 [WARNING][5322] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0", GenerateName:"calico-kube-controllers-567c559546-", Namespace:"calico-system", SelfLink:"", UID:"5f4842b6-cff0-4aec-9953-449edcf4eb42", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567c559546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea", Pod:"calico-kube-controllers-567c559546-mqg9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb65fd49ad8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.258 [INFO][5322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.258 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" iface="eth0" netns="" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.258 [INFO][5322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.258 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.283 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.283 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.283 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.291 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.291 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.294 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:53.299148 containerd[1503]: 2025-01-30 18:30:53.296 [INFO][5322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.299836 containerd[1503]: time="2025-01-30T18:30:53.299706599Z" level=info msg="TearDown network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" successfully" Jan 30 18:30:53.299836 containerd[1503]: time="2025-01-30T18:30:53.299740875Z" level=info msg="StopPodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" returns successfully" Jan 30 18:30:53.300521 containerd[1503]: time="2025-01-30T18:30:53.300476602Z" level=info msg="RemovePodSandbox for \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\"" Jan 30 18:30:53.300577 containerd[1503]: time="2025-01-30T18:30:53.300523317Z" level=info msg="Forcibly stopping sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\"" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.344 [WARNING][5347] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0", GenerateName:"calico-kube-controllers-567c559546-", Namespace:"calico-system", SelfLink:"", UID:"5f4842b6-cff0-4aec-9953-449edcf4eb42", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567c559546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"c95d065ad8be22672d73ca237059fd7d51d44f05e54d7c08bf8a1ffd0a2bfaea", Pod:"calico-kube-controllers-567c559546-mqg9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb65fd49ad8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.345 [INFO][5347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.345 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" iface="eth0" netns="" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.345 [INFO][5347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.345 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.372 [INFO][5354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.372 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.372 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.382 [WARNING][5354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.382 [INFO][5354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" HandleID="k8s-pod-network.3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Workload="srv--eex0h.gb1.brightbox.com-k8s-calico--kube--controllers--567c559546--mqg9t-eth0" Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.387 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:53.393312 containerd[1503]: 2025-01-30 18:30:53.390 [INFO][5347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a" Jan 30 18:30:53.393312 containerd[1503]: time="2025-01-30T18:30:53.393266090Z" level=info msg="TearDown network for sandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" successfully" Jan 30 18:30:53.398778 containerd[1503]: time="2025-01-30T18:30:53.398647742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 18:30:53.398923 containerd[1503]: time="2025-01-30T18:30:53.398839416Z" level=info msg="RemovePodSandbox \"3d90fe38e5adee5ae7edb2ff5ad7550dc48518245527a42ef60463fd41d8a25a\" returns successfully" Jan 30 18:30:53.399664 containerd[1503]: time="2025-01-30T18:30:53.399599733Z" level=info msg="StopPodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\"" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.439 [WARNING][5372] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16dcc974-7fc7-4198-87ca-318a9eb2503b", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4", Pod:"coredns-6f6b679f8f-g2vfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefe9481fc58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.439 [INFO][5372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.439 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" iface="eth0" netns="" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.439 [INFO][5372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.439 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.465 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.465 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.465 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.476 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.476 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.480 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:53.486236 containerd[1503]: 2025-01-30 18:30:53.482 [INFO][5372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.487957 containerd[1503]: time="2025-01-30T18:30:53.486278738Z" level=info msg="TearDown network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" successfully" Jan 30 18:30:53.487957 containerd[1503]: time="2025-01-30T18:30:53.486315802Z" level=info msg="StopPodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" returns successfully" Jan 30 18:30:53.487957 containerd[1503]: time="2025-01-30T18:30:53.487003835Z" level=info msg="RemovePodSandbox for \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\"" Jan 30 18:30:53.487957 containerd[1503]: time="2025-01-30T18:30:53.487051328Z" level=info msg="Forcibly stopping sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\"" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.547 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16dcc974-7fc7-4198-87ca-318a9eb2503b", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-eex0h.gb1.brightbox.com", ContainerID:"35bc300ad4bb89b6218b43ed92da09be712678abaa64ba440907e8d5a0d807f4", Pod:"coredns-6f6b679f8f-g2vfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefe9481fc58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.547 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.547 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" iface="eth0" netns="" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.547 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.547 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.571 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.571 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.571 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.578 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.578 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" HandleID="k8s-pod-network.7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Workload="srv--eex0h.gb1.brightbox.com-k8s-coredns--6f6b679f8f--g2vfs-eth0" Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.581 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:30:53.584078 containerd[1503]: 2025-01-30 18:30:53.582 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a" Jan 30 18:30:53.585355 containerd[1503]: time="2025-01-30T18:30:53.584095832Z" level=info msg="TearDown network for sandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" successfully" Jan 30 18:30:53.586319 containerd[1503]: time="2025-01-30T18:30:53.586295157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 18:30:53.586396 containerd[1503]: time="2025-01-30T18:30:53.586350575Z" level=info msg="RemovePodSandbox \"7d001750bfe96a01d3f8925bf79ee462a8b00cb08fdc9d2aca59c3b02484a92a\" returns successfully" Jan 30 18:30:59.895543 kubelet[2657]: I0130 18:30:59.895271 2657 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:31:05.773000 systemd[1]: Started sshd@9-10.244.90.134:22-139.178.89.65:34132.service - OpenSSH per-connection server daemon (139.178.89.65:34132). Jan 30 18:31:06.743499 sshd[5472]: Accepted publickey for core from 139.178.89.65 port 34132 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:06.749123 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:06.767506 systemd-logind[1486]: New session 12 of user core. Jan 30 18:31:06.779849 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 18:31:07.983367 sshd[5472]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:07.991367 systemd[1]: sshd@9-10.244.90.134:22-139.178.89.65:34132.service: Deactivated successfully. Jan 30 18:31:07.994437 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 18:31:07.995637 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Jan 30 18:31:07.997204 systemd-logind[1486]: Removed session 12. Jan 30 18:31:13.145190 systemd[1]: Started sshd@10-10.244.90.134:22-139.178.89.65:44544.service - OpenSSH per-connection server daemon (139.178.89.65:44544). Jan 30 18:31:14.050329 sshd[5486]: Accepted publickey for core from 139.178.89.65 port 44544 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:14.060110 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:14.069481 systemd-logind[1486]: New session 13 of user core. 
Jan 30 18:31:14.075855 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 18:31:14.795547 sshd[5486]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:14.805253 systemd[1]: sshd@10-10.244.90.134:22-139.178.89.65:44544.service: Deactivated successfully. Jan 30 18:31:14.812044 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 18:31:14.813757 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Jan 30 18:31:14.814988 systemd-logind[1486]: Removed session 13. Jan 30 18:31:19.964048 systemd[1]: Started sshd@11-10.244.90.134:22-139.178.89.65:44560.service - OpenSSH per-connection server daemon (139.178.89.65:44560). Jan 30 18:31:20.867032 sshd[5508]: Accepted publickey for core from 139.178.89.65 port 44560 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:20.870621 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:20.881178 systemd-logind[1486]: New session 14 of user core. Jan 30 18:31:20.885833 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 18:31:21.588359 sshd[5508]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:21.596594 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Jan 30 18:31:21.597223 systemd[1]: sshd@11-10.244.90.134:22-139.178.89.65:44560.service: Deactivated successfully. Jan 30 18:31:21.601453 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 18:31:21.603238 systemd-logind[1486]: Removed session 14. Jan 30 18:31:21.745467 systemd[1]: Started sshd@12-10.244.90.134:22-139.178.89.65:39494.service - OpenSSH per-connection server daemon (139.178.89.65:39494). Jan 30 18:31:21.936964 systemd[1]: run-containerd-runc-k8s.io-a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75-runc.wEclw1.mount: Deactivated successfully. Jan 30 18:31:22.688542 sshd[5523]: Accepted publickey for core from 139.178.89.65 port 39494 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:22.692337 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:22.702341 systemd-logind[1486]: New session 15 of user core. Jan 30 18:31:22.710997 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 18:31:23.492024 sshd[5523]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:23.500124 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Jan 30 18:31:23.500772 systemd[1]: sshd@12-10.244.90.134:22-139.178.89.65:39494.service: Deactivated successfully. Jan 30 18:31:23.504714 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 18:31:23.508414 systemd-logind[1486]: Removed session 15. Jan 30 18:31:23.654316 systemd[1]: Started sshd@13-10.244.90.134:22-139.178.89.65:39510.service - OpenSSH per-connection server daemon (139.178.89.65:39510). Jan 30 18:31:24.551764 sshd[5553]: Accepted publickey for core from 139.178.89.65 port 39510 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:24.556800 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:24.566277 systemd-logind[1486]: New session 16 of user core. Jan 30 18:31:24.577855 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 18:31:25.271921 sshd[5553]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:25.281790 systemd[1]: sshd@13-10.244.90.134:22-139.178.89.65:39510.service: Deactivated successfully. 
Jan 30 18:31:25.284621 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 18:31:25.285908 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Jan 30 18:31:25.287846 systemd-logind[1486]: Removed session 16. Jan 30 18:31:30.441245 systemd[1]: Started sshd@14-10.244.90.134:22-139.178.89.65:39520.service - OpenSSH per-connection server daemon (139.178.89.65:39520). Jan 30 18:31:31.401980 sshd[5592]: Accepted publickey for core from 139.178.89.65 port 39520 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:31.404889 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:31.413837 systemd-logind[1486]: New session 17 of user core. Jan 30 18:31:31.420870 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 18:31:32.204291 sshd[5592]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:32.214961 systemd[1]: sshd@14-10.244.90.134:22-139.178.89.65:39520.service: Deactivated successfully. Jan 30 18:31:32.219044 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 18:31:32.221599 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Jan 30 18:31:32.225119 systemd-logind[1486]: Removed session 17. Jan 30 18:31:37.369360 systemd[1]: Started sshd@15-10.244.90.134:22-139.178.89.65:56786.service - OpenSSH per-connection server daemon (139.178.89.65:56786). Jan 30 18:31:38.352919 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 56786 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:38.359123 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:38.368712 systemd-logind[1486]: New session 18 of user core. Jan 30 18:31:38.376166 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 18:31:39.086446 sshd[5610]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:39.097451 systemd[1]: sshd@15-10.244.90.134:22-139.178.89.65:56786.service: Deactivated successfully. Jan 30 18:31:39.101402 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 18:31:39.103898 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Jan 30 18:31:39.105470 systemd-logind[1486]: Removed session 18. Jan 30 18:31:44.259246 systemd[1]: Started sshd@16-10.244.90.134:22-139.178.89.65:58962.service - OpenSSH per-connection server daemon (139.178.89.65:58962). Jan 30 18:31:45.166917 sshd[5623]: Accepted publickey for core from 139.178.89.65 port 58962 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:45.170541 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:45.181780 systemd-logind[1486]: New session 19 of user core. Jan 30 18:31:45.190841 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 18:31:45.901016 sshd[5623]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:45.909127 systemd[1]: sshd@16-10.244.90.134:22-139.178.89.65:58962.service: Deactivated successfully. Jan 30 18:31:45.913254 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 18:31:45.918392 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Jan 30 18:31:45.921260 systemd-logind[1486]: Removed session 19. Jan 30 18:31:46.061412 systemd[1]: Started sshd@17-10.244.90.134:22-139.178.89.65:58966.service - OpenSSH per-connection server daemon (139.178.89.65:58966). 
Jan 30 18:31:46.975377 sshd[5636]: Accepted publickey for core from 139.178.89.65 port 58966 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:46.979540 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:46.988525 systemd-logind[1486]: New session 20 of user core. Jan 30 18:31:46.995823 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 18:31:47.936583 sshd[5636]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:47.952873 systemd[1]: sshd@17-10.244.90.134:22-139.178.89.65:58966.service: Deactivated successfully. Jan 30 18:31:47.955425 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 18:31:47.957668 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Jan 30 18:31:47.962631 systemd-logind[1486]: Removed session 20. Jan 30 18:31:48.122435 systemd[1]: Started sshd@18-10.244.90.134:22-139.178.89.65:58978.service - OpenSSH per-connection server daemon (139.178.89.65:58978). Jan 30 18:31:49.047736 sshd[5647]: Accepted publickey for core from 139.178.89.65 port 58978 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:49.050119 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:49.056619 systemd-logind[1486]: New session 21 of user core. Jan 30 18:31:49.067967 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 18:31:51.960421 sshd[5647]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:51.972299 systemd[1]: sshd@18-10.244.90.134:22-139.178.89.65:58978.service: Deactivated successfully. Jan 30 18:31:51.975510 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 18:31:51.978201 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Jan 30 18:31:51.980397 systemd-logind[1486]: Removed session 21. Jan 30 18:31:52.102183 systemd[1]: Started sshd@19-10.244.90.134:22-139.178.89.65:40482.service - OpenSSH per-connection server daemon (139.178.89.65:40482). Jan 30 18:31:53.030836 sshd[5687]: Accepted publickey for core from 139.178.89.65 port 40482 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:53.035025 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:53.047460 systemd-logind[1486]: New session 22 of user core. Jan 30 18:31:53.051976 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 18:31:54.219240 sshd[5687]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:54.229577 systemd[1]: sshd@19-10.244.90.134:22-139.178.89.65:40482.service: Deactivated successfully. Jan 30 18:31:54.232600 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 18:31:54.235144 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Jan 30 18:31:54.237401 systemd-logind[1486]: Removed session 22. Jan 30 18:31:54.383087 systemd[1]: Started sshd@20-10.244.90.134:22-139.178.89.65:40484.service - OpenSSH per-connection server daemon (139.178.89.65:40484). Jan 30 18:31:55.294269 sshd[5700]: Accepted publickey for core from 139.178.89.65 port 40484 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:31:55.297863 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:31:55.309234 systemd-logind[1486]: New session 23 of user core. Jan 30 18:31:55.315900 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 30 18:31:56.024084 sshd[5700]: pam_unix(sshd:session): session closed for user core Jan 30 18:31:56.031619 systemd[1]: sshd@20-10.244.90.134:22-139.178.89.65:40484.service: Deactivated successfully. Jan 30 18:31:56.032389 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Jan 30 18:31:56.037318 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 18:31:56.040055 systemd-logind[1486]: Removed session 23. Jan 30 18:31:59.382572 systemd[1]: run-containerd-runc-k8s.io-a93d31689cc8456258ecaa0f9437166c90435d946b60dac86150e1812bf82e75-runc.vKvwng.mount: Deactivated successfully. Jan 30 18:32:01.194988 systemd[1]: Started sshd@21-10.244.90.134:22-139.178.89.65:40494.service - OpenSSH per-connection server daemon (139.178.89.65:40494). Jan 30 18:32:02.138291 sshd[5764]: Accepted publickey for core from 139.178.89.65 port 40494 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:32:02.141930 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:32:02.151132 systemd-logind[1486]: New session 24 of user core. Jan 30 18:32:02.158476 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 18:32:03.010224 sshd[5764]: pam_unix(sshd:session): session closed for user core Jan 30 18:32:03.017534 systemd[1]: sshd@21-10.244.90.134:22-139.178.89.65:40494.service: Deactivated successfully. Jan 30 18:32:03.026889 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 18:32:03.029764 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Jan 30 18:32:03.032146 systemd-logind[1486]: Removed session 24. Jan 30 18:32:08.173051 systemd[1]: Started sshd@22-10.244.90.134:22-139.178.89.65:51486.service - OpenSSH per-connection server daemon (139.178.89.65:51486). Jan 30 18:32:09.089647 sshd[5779]: Accepted publickey for core from 139.178.89.65 port 51486 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:32:09.093525 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:32:09.104828 systemd-logind[1486]: New session 25 of user core. Jan 30 18:32:09.110036 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 18:32:09.808541 sshd[5779]: pam_unix(sshd:session): session closed for user core Jan 30 18:32:09.816788 systemd[1]: sshd@22-10.244.90.134:22-139.178.89.65:51486.service: Deactivated successfully. Jan 30 18:32:09.820184 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 18:32:09.823627 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Jan 30 18:32:09.825469 systemd-logind[1486]: Removed session 25. Jan 30 18:32:14.977031 systemd[1]: Started sshd@23-10.244.90.134:22-139.178.89.65:35744.service - OpenSSH per-connection server daemon (139.178.89.65:35744). Jan 30 18:32:15.864728 sshd[5808]: Accepted publickey for core from 139.178.89.65 port 35744 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:32:15.867925 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:32:15.878136 systemd-logind[1486]: New session 26 of user core. Jan 30 18:32:15.882928 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 18:32:16.569341 sshd[5808]: pam_unix(sshd:session): session closed for user core Jan 30 18:32:16.575318 systemd[1]: sshd@23-10.244.90.134:22-139.178.89.65:35744.service: Deactivated successfully. Jan 30 18:32:16.578388 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 30 18:32:16.581827 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit. Jan 30 18:32:16.583275 systemd-logind[1486]: Removed session 26.
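[Editor's note] From 18:30:59 onward the log settles into a regular SSH cadence: sessions 12 through 26 for user core, all from 139.178.89.65, each opened by sshd/pam_unix and cleanly torn down by systemd ("session-N.scope: Deactivated successfully." followed by systemd-logind removing the session). A quick way to confirm no session leaked is to balance logind's "New session" and "Removed session" events; the Go sketch below does that over a log fed on stdin. The file name and invocation are hypothetical; the regexes match the logind message text seen above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	opened := regexp.MustCompile(`New session (\d+) of user`)
	closed := regexp.MustCompile(`Removed session (\d+)\.`)
	// balance per session id: +1 on open, -1 on close; 0 means clean.
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
		if m := closed.FindStringSubmatch(line); m != nil {
			counts[m[1]]--
		}
	}
	for id, n := range counts {
		if n != 0 {
			fmt.Printf("session %s unbalanced (%+d)\n", id, n)
		}
	}
	fmt.Println("done")
}

Run, for example, as: go run sessioncheck.go < node.log (names hypothetical). For the window captured here, every session from 12 through 26 balances to zero.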